Eric Hartford, the author of the Dolphin models, released dolphin-2.2-yi-34b.
This is one of the earliest community finetunes of Yi-34B.
Yi-34B was developed by a Chinese company, which claims SOTA performance on par with GPT-3.5.
HF: https://huggingface.co/ehartford/dolphin-2_2-yi-34b
Announcement: https://x.com/erhartford/status/1723940171991663088?s=20
16k context is awesome. Now we need Goliath-120B with 16k context and I'm done with OpenAI.
Is Goliath that good? Is it so much better than all of the Llama2-70B tunes that it's worth the hardware investment needed to run it?