Yi is a series of LLMs trained from scratch at 01.AI. The models use the same architecture as Llama, making them compatible with the whole Llama-based ecosystem. Just in November, they released:
- Base 6B and 34B models
- Models with extended context of up to 200k tokens
- Today, the Chat models
With this release, they are also shipping 4-bit AWQ-quantized and 8-bit GPTQ-quantized versions
- Chat model - https://huggingface.co/01-ai/Yi-34B-Chat
- Demo to try it out - https://huggingface.co/spaces/01-ai/Yi-34B-Chat
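Since the checkpoints use the standard Llama layout, the usual transformers loading path should work as-is. A minimal sketch (the repo id comes from the link above; the presence of a chat template in the tokenizer config is an assumption on my part):

```python
# Minimal sketch: chatting with Yi-34B-Chat via transformers, assuming the
# standard Llama-compatible checkpoint layout and a chat template in the repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B-Chat"  # repo id from the model card linked above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# Build a chat-formatted prompt from the tokenizer's own template (assumed present).
messages = [{"role": "user", "content": "Give me a one-line summary of the Yi models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```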
Things to consider:
- Llama-compatible format, so you can use it across a bunch of tools
- The license does not allow commercial use by default, unfortunately, but you can request commercial use and they are quite responsive
- 34B is an amazing model size for consumer GPUs (see the quantized loading sketch after this list)
- Yi-34B is at the top of the Open LLM Leaderboard, making it a very strong base model for a chat one
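On the consumer-GPU point: 34B at 4-bit is roughly 17-18 GB of weights, which is what makes a single 24 GB card feasible with a modest context. A hedged sketch of loading the AWQ release; the repo id below is my assumption (check the 01-ai org page for the exact name), and transformers needs the autoawq package installed to load AWQ weights:

```python
# Hedged sketch: loading the 4-bit AWQ release on a single 24 GB consumer GPU.
# The repo id is assumed, not confirmed. Requires `pip install autoawq`
# alongside a recent transformers release with AWQ support.
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_id = "01-ai/Yi-34B-Chat-4bits"  # assumed repo name for the AWQ 4-bit weights

tokenizer = AutoTokenizer.from_pretrained(quant_id)
model = AutoModelForCausalLM.from_pretrained(
    quant_id,
    device_map="auto",  # AWQ weights load pre-quantized, ~17-18 GB for 34B at 4-bit
)
```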
nous-capybara, tess
I cannot load that one :(. Dolphin does work for me, but I cannot change the output writing style.
That sucks; all the ones I've downloaded work so far, but I'm using exl2.
Those are actually two different 34B chat models, but there is a merge of them, nous-tess. They were the first that came to mind. If you search for 34B, there are others.
For whatever reason, I keep getting memory errors with nous, but can run yi 34b fine. No idea what is wrong.