• hugganao@alien.topB · 1 year ago

      how does merging work with what layers to choose from what models in the merging process?

      • llama_in_sunglasses@alien.topB · 1 year ago

        I use dolphin-yi because it listens the best of the Yi finetunes, but I find myself fiddling with the settings more for Yi than for most models. If it starts looping on itself, I pick a different preset and tweak it.

  • Sabin_Stargem@alien.topB · 1 year ago

    From the looks of it, the difference between GS and SG is the system prompt format and the order of the model merges. Guess I will go for GS, since it claims that any prompt format can be used. One is merged Tess-Nous; the other is the opposite order.

  • Dazzling_Ad1507@alien.topB · 1 year ago

    This model seems to be very broken. I also attempted to quantize it, and it diverges into nonsense or repeats words endlessly no matter the settings. :/

    • candre23@alien.topB · 1 year ago

      All Yi models are extremely picky when it comes to things like prompt format, end string, and rope parameters. You'll get gibberish from any of them unless you get everything set up just right, at which point they perform very well.
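      To make that concrete, here's a sketch of the "set up just right" checklist, assuming llama-cpp-python (the parameter names are that library's; the ChatML template and 5M rope theta are Yi's published defaults, but double-check the model card for your particular finetune):

```python
# Sketch of the settings that commonly cause gibberish with Yi-chat models
# when wrong. Assumes llama-cpp-python; values are Yi's published defaults.

def chatml_prompt(system: str, user: str) -> str:
    """Yi chat finetunes were trained on ChatML, so the prompt must match it."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

yi_settings = {
    "rope_freq_base": 5_000_000,  # Yi's rope theta; llama2's 10k default breaks it
    "rope_freq_scale": 1.0,       # no linear scaling within native context
    "stop": ["<|im_end|>"],       # end string -- without it the model rambles on
}

# Usage (model path illustrative):
# from llama_cpp import Llama
# llm = Llama(model_path="yi-34b-chat.Q4_K_M.gguf", n_ctx=4096,
#             rope_freq_base=yi_settings["rope_freq_base"])
# llm(chatml_prompt("You are helpful.", "Hi!"), stop=yi_settings["stop"])
```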

      • BoshiAI@alien.topB · 1 year ago

        Thanks for confirming this. I've seen so much praise for these models, yet I've experienced no end of problems trying to get decent, consistent output. A couple of Yi finetunes seem better than others, but there are still too many problems for me to prefer them over other models (for RP/chat purposes).

        I'm still hopeful it's just a matter of time (and a fair amount of trial and error) before I, app developers, and model mixers work out how to get fantastic, consistent out-of-the-box results.

        • candre23@alien.topB · 1 year ago

          It's a new foundational model, so some teething pains are to be expected. Yi is heavily based on llama2 (directly copied, for the most part), but there are just enough differences in the training parameters that default llama2 settings don't give good results. KCPP has already addressed the rope scaling, and I'm sure it's only a matter of time before the other issues are hashed out.

        • Desm0nt@alien.topB · 1 year ago

          Hm. I just load the GGUF yi-34b-chat q4_k_m in oobabooga via llama.cpp with default params and 8k context, and it just works like a charm. Better (more lively language) than any 70B from OpenRouter (my local machine can't handle a 70B).
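          For anyone who wants to reproduce that outside oobabooga, here is the same setup sketched as llama-cpp-python load arguments (an assumption on my part; oobabooga's llama.cpp loader passes similar options through, and the filename is illustrative):

```python
# The setup described above, spelled out as llama-cpp-python keyword
# arguments. "Default params just work" because the GGUF file itself
# carries Yi's rope settings and chat template metadata.
load_kwargs = {
    "model_path": "yi-34b-chat.Q4_K_M.gguf",  # illustrative path
    "n_ctx": 8192,       # the 8k context mentioned above
    "n_gpu_layers": -1,  # offload all layers that fit; 0 for CPU-only
}

# from llama_cpp import Llama
# llm = Llama(**load_kwargs)
# out = llm.create_chat_completion(
#     messages=[{"role": "user", "content": "Hello!"}])
# print(out["choices"][0]["message"]["content"])
```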