Yi is a series of LLMs trained from scratch at 01.AI. The models share the same architecture as Llama, making them compatible with all Llama-based ecosystems. Just this November, they released

  • Base 6B and 34B models
  • Models with extended context of up to 200k tokens
  • Today, the Chat models

With the release, they are also publishing 4-bit AWQ-quantized and 8-bit GPTQ-quantized versions.
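Because the chat models plug into standard Llama tooling, the main thing to get right when running them outside a framework is the prompt template. A minimal sketch, assuming a ChatML-style template (an assumption worth verifying against the model card; the helper function name is hypothetical):

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML-style prompt.

    ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers and
    ends with an open assistant turn for the model to complete. This is an
    assumed template; check the tokenizer's chat template before relying on it.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave an open assistant turn so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([{"role": "user", "content": "Hello"}])
print(prompt)
```

In practice, loading the tokenizer with `AutoTokenizer.from_pretrained(...)` and calling `apply_chat_template(messages)` should produce the correct string from the bundled template, which is the safer path.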

Things to consider:

  • Llama-compatible format, so you can use the models across a bunch of tools
  • The license is non-commercial, unfortunately, but you can request commercial use and they are quite responsive
  • 34B is an amazing model size for consumer GPUs
  • Yi-34B is at the top of the open-source leaderboard, making it a very strong base for a chat model
    • reddithotel@alien.top · 1 year ago

      I cannot load that one :(. Dolphin does work for me, but I cannot change the output writing style.

      • a_beautiful_rhind@alien.top · 1 year ago

        Sucks. All the ones I downloaded work so far, but I’m using exl2.

        Those are actually two different 34B chat models, but there is a merge of them, nous-tess. They were the first that came to mind; if you search 34B there are others.

    • azriel777@alien.top · 1 year ago

      For whatever reason, I keep getting memory errors with nous, but can run Yi 34B fine. No idea what is wrong.