After some prompting:
You: Can you refer to the International Phonetic Alphabet guide for speakers and incorporate the guide into your answer?
GPT-4 produced the same answer after breaking down the sentence with the IPA guide.
You: Wouldn’t the word “Catholic” be pronounced “Cath-lic” by most English speakers?
ChatGPT (4): You are correct. The word “Catholic” starts with the sound “Cath-”, which is different from the sound “Cat-”. In the International Phonetic Alphabet (IPA), “Catholic” is typically transcribed as /ˈkæθəlɪk/ or /ˈkæθlɪk/, where the /kæθ/ part represents the “Cath-” sound. This is distinct from the /kæt/ sound in “cat”.
Given Jimmy’s unique habit of substituting “dog” for “cat”, he would only make this substitution where the sound /kæt/ is present. Since “Catholic” does not contain the /kæt/ sound, Jimmy would not alter this word when reading it aloud.
Therefore, the sentence “The Roman Catholic Church strongly condemns the cattle herding conditions recently revealed from Catalonia” would be altered by Jimmy as follows:
- “Catholic” remains unchanged.
- “Cattle” becomes “Doggle” (/ˈdɒɡl̩/).
- “Catalonia” becomes “Dogalonia” (/ˌdɒɡəˈloʊniə/ in American English or /ˌdɒɡəˈləʊniə/ in British English).
So, the sentence would be read by Jimmy as: “The Roman Catholic Church strongly condemns the doggle herding conditions recently revealed from Dogalonia.”
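The substitution rule ChatGPT describes can be sketched in a few lines of Python: swap only where the word's pronunciation begins with the /kæt/ sound, so “Catholic” (/kæθ.../) is left alone. The IPA onsets and replacements in the lookup table below are hand-entered assumptions for illustration, not drawn from a real pronunciation dictionary.

```python
import re

# Toy pronunciation table: word -> (IPA onset, Jimmy's reading).
# Entries are illustrative assumptions, not from a real dictionary.
WORDS = {
    "catholic":  ("kæθ", "catholic"),   # no /kæt/ sound -> unchanged
    "cattle":    ("kæt", "doggle"),
    "catalonia": ("kæt", "dogalonia"),
}

def jimmy_reads(sentence: str) -> str:
    """Apply the cat->dog substitution only where the /kæt/ onset occurs."""
    def swap(m: re.Match) -> str:
        word = m.group(0)
        entry = WORDS.get(word.lower())
        if entry and entry[0] == "kæt":
            out = entry[1]
            # Preserve the original word's capitalisation.
            return out.capitalize() if word[0].isupper() else out
        return word
    return re.sub(r"[A-Za-z]+", swap, sentence)

print(jimmy_reads("The Roman Catholic Church strongly condemns the "
                  "cattle herding conditions recently revealed from Catalonia"))
```

Running this reproduces the altered sentence above, with “Catholic” untouched because its onset fails the /kæt/ check.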
I think you may need to shard the optimizer state and gradients. I’ve been using DeepSpeed and have had some good success. Here is a writeup that compares the different DeepSpeed ZeRO stages: [RWKV-infctx] DeepSpeed 2 / 3 comparisons | RWKV-InfCtx-Validation – Weights & Biases (wandb.ai). Look at the bottom of the article for an accessible overview; I’m not the author, and I haven’t validated the findings. Distributed training tools are becoming more and more necessary. The other option is quantization, but that may risk quality loss. Here is a discussion on that: https://www.reddit.com/r/LocalLLaMA/comments/153lfc2/quantization_how_much_quality_is_lost/
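For reference, sharding the optimizer state and gradients corresponds to ZeRO stage 2 in DeepSpeed, configured through its JSON config file. A minimal sketch, with illustrative batch-size values (tune these for your own model and hardware):

```json
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 8,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" },
    "overlap_comm": true,
    "contiguous_gradients": true
  }
}
```

Stage 3 additionally shards the model parameters themselves, which saves more memory at the cost of extra communication; the writeup linked above compares the trade-offs.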