• IxinDow@alien.top

    How many tokens are in your Substack example?
    Do you have examples of using the model for fiction in the 16K-40K token range?
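
    (For anyone who wants to check counts like this themselves: a minimal sketch using the transformers tokenizer; the repo id and file name are hypothetical placeholders.)

    ```python
    # Minimal sketch: count tokens in a text file with the model's own tokenizer.
    # The repo id and file name are hypothetical placeholders.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("someuser/Tess-Yi-34B-200K")
    with open("substack_example.txt") as f:
        text = f.read()
    print(len(tokenizer(text).input_ids))
    ```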

  • llama_in_sunglasses@alien.top

    Thanks for the model; it’s really nice to have some Synthia magic on a Yi-34B 200K base.

    Part of the generation from your suggested prompt:

    The magnetic field of our planet is generated by an iron-nickel core that rotates like a dynamo, creating electric currents which in turn produce the magnetic force we experience as compass needles pointing northward when held still relative to this field’s direction over time periods measured in years rather than seconds or minutes because it varies slightly due to solar wind interactions with upper layers known collectively as “ionosphere.”

    I found this particular output unintentionally hilarious because it reminds me a lot of the Reddit comments I type out and then delete because they’re just overexplainy run-on gibberish.

  • mcmoose1900@alien.top

    Almost the same syntax as Yi Capybara. Excellent.

    I propose all Yi 34B 200K finetunes use Vicuna-ish prompt syntax, so they can ALL be merged into one hellish Voltron model.
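
    For anyone unfamiliar, “Vicuna-ish” just means plain-text turns with USER:/ASSISTANT: markers. A minimal sketch of the template (the system line is the common Vicuna v1.1 default; any given finetune may expect its own variant, so check the model card):

    ```python
    # Minimal sketch of a Vicuna-style prompt template.
    # The system message is the common Vicuna v1.1 default, not
    # necessarily what this particular finetune was trained on.
    SYSTEM = (
        "A chat between a curious user and an artificial intelligence "
        "assistant. The assistant gives helpful, detailed, and polite "
        "answers to the user's questions."
    )

    def build_prompt(turns, user_msg):
        """Format prior (user, assistant) turns plus a new user message."""
        parts = [SYSTEM]
        for user, assistant in turns:
            parts.append(f"USER: {user} ASSISTANT: {assistant}")
        parts.append(f"USER: {user_msg} ASSISTANT:")
        return "\n".join(parts)

    print(build_prompt([], "Why do Yi finetunes share this format?"))
    ```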

      • SomeOddCodeGuy@alien.top

        Just wanted to come back and let you know I started using this last night, and it’s fantastic. I haven’t put it through much testing yet, but on first use I’m very impressed with this model as a general-purpose AI assistant. It keeps the Assistant’s more informal speech patterns while also answering questions well and keeping up with a large context; those are three checkboxes I’ve never been able to check at once. This praise won’t get much visibility since it’s an older thread, but I wanted to let you know at least.

  • mcmoose1900@alien.top

    More random feedback: you should put some combination of Yi, 34B, and/or 200K in the title.

    No one tags anything on HF, so the only way to browse models is by title. I would have totally missed this in my Yi/34B searches if not for the Reddit post.
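
    For what it’s worth, title search is also what the Hub API exposes. A minimal sketch with the huggingface_hub client (the search string is just an example):

    ```python
    # Minimal sketch: browse Hugging Face models by title substring,
    # since tags are rarely applied. Requires `pip install huggingface_hub`.
    from huggingface_hub import HfApi

    api = HfApi()
    for model in api.list_models(search="Yi 34B 200K", limit=20):
        print(model.id)
    ```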

    • Sabin_Stargem@alien.top

      Yeah, it was only by luck that I stumbled onto this. Something like “Yi-34b-200k - Tess Medium” would work better.

  • CasimirsBlake@alien.top

    Tell me I’m going to need another GPU without telling me I’m going to need another GPU… Eeek.

    • Sabin_Stargem@alien.top

      When I built my gaming rig, I thought I wouldn’t need to upgrade for several years. Then an AI came along and kicked my sandcastle into the surf.

      My wallet is unhappy, and has already lost inches from the diet it has been put on.

  • YearZero@alien.top

    Testing it now, but for me it’s worse than 7B models on logic questions. Huge disappointment compared to Dolphin and Nous-Capybara, both of which are Yi finetunes and the best models I’ve tested so far. It just goes to show how much difference finetuning a base model can make.

  • migtissera@alien.top (OP)

    On another note, this place is super hostile! I didn’t think it would be, considering it’s the LocalLLaMA subreddit and we are all here to support open-source or freely available models.

    This is harsher than the Twitter mob!

    I’ll still release models, but sorry guys, not coming here again.

    • llama_in_sunglasses@alien.top

      Sorry to hear that. This thread is pretty wild; almost every other model thread on LocalLLaMA has at most a few crazies, and they get downvoted. Your Synthia models are fairly popular, so the reactions you got seem pretty out of place to me.

  • sophosympatheia@alien.top

    This model kicks ass. I strongly recommend trying it for roleplay. The 4-bit 32g act-order GPTQ quant is on par with 70B models, so I can only imagine what higher-bit quants can do.
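
    For anyone wondering, “4-bit 32g act-order” maps to GPTQ settings bits=4, group_size=32, desc_act=True, which transformers reads from the repo’s quantization config. A minimal sketch of loading such a quant (the repo id is hypothetical; requires `optimum`, `auto-gptq`, and `accelerate`):

    ```python
    # Minimal sketch: load a prequantized 4-bit GPTQ model with transformers.
    # The quantization settings (bits=4, group_size=32, desc_act=True) come
    # from the repo's own config. The repo id below is hypothetical.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "someuser/Tess-Yi-34B-200K-GPTQ"  # hypothetical placeholder
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer("USER: Hi! ASSISTANT:", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
    ```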

  • bespoke-mushroom@alien.top

    Read through the Substack “conversation” with Tess. Obviously Tess is so good that it reveals a strange symmetry…

    …A mathematical model (Tess) gives a seemingly coherent English-language response, describing a seemingly coherent branch of theoretical physics, which after reading turns out to be nothing but mathematical gibberish.

    Thank heavens civil engineers do not use phrases like “These infinities are due to the fact that particles can emit or absorb infinitely many virtual particles. Renormalization allows us to make sense of these infinities.”

    Thanks for your work on Tess; I am sure it can be used for either actual science or even greater fantasy than QFT.