Hi everyone, I’d like to share something that I’ve been working on for the past few days: https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0

This model was created by interleaving layers from three different models (Euryale-1.3-L2-70B, Nous-Hermes-Llama2-70b, and SynthIA-70B-v1.5), resulting in a model larger than any of the three used in the merge. I have branches on the repo for exl2 quants at 3.0 and 4.85 bpw, which allow the model to run in 48GB or 80GB of VRAM, respectively.
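
As a rough sanity check on those numbers, here’s a back-of-the-envelope estimate that treats the quant as pure weight storage and ignores the KV cache and runtime overhead:

    # Quantized weight footprint ~= parameter count * bits-per-weight / 8
    params = 120e9
    for bpw in (3.0, 4.85):
        print(f"{bpw} bpw ~= {params * bpw / 8 / 1e9:.1f} GB of weights")
    # 3.0 bpw -> 45.0 GB and 4.85 bpw -> 72.8 GB, which leaves headroom
    # for context in 48GB and 80GB respectively.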

I love using LLMs for RPs and ERPs, so my goal was to create something similar to Goliath, which is honestly the best roleplay model I’ve ever used. I’ve done some initial testing, and so far the results seem encouraging. I’d love to get some feedback on this from the community! Going forward, my plan is to do more experiments with merging models, possibly going even larger than 120b parameters to see where the gains stop.

  • a_beautiful_rhind@alien.top · 1 year ago

    Hell yea! No Xwin; I hate that model. I’m down for the 3-bit. I didn’t like Tess-XL so far, so hopefully you’ve made a David here.

  • uti24@alien.top · 1 year ago

    Oh, we definitely need a GGUF variant of this model. I love Goliath-120B (I even think it might be better than Falcon-180B) and would love to run it.

  • CryptoSpecialAgent@alien.top · 1 year ago

    We need a benchmark specifically for NSFW content generation, because I have a theory that I think I may try to prove: NSFW content, at least of a textual nature, can hold its own against human authors even with a 7b model…

    RWKV, for example, is just a toy for most things. But give it a couple of lines of erotica and it will spit out high-quality smut until its context runs out.

    My theory is that the internet is full of erotic text content, and that such content exhibits less variety between outputs than other kinds of written material. Together, this means that even an underpowered, half-assed LLM will likely be capable of creating half-decent porn texts, assuming it is uncensored.

    Would love to see some sample outputs from this monstrosity (I love that somebody made this but I feel guilty consuming so much electricity to create nsfw lmao)

  • xinranli@alien.top · 1 year ago

    Great work! Does anyone happen to have a guide, tutorial, or paper on how to combine or interleave models? I’d also love to try frankensteining models together myself.
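
    Edit: from what I’ve gathered, tools like mergekit automate this, and the core trick is just slicing decoder layers out of the donors and renumbering them. Here’s a rough sketch of the idea for two Llama-family donors (the donor names and slice schedule are made-up placeholders, not the actual Venus-120b recipe):

        # Naive layer interleaving ("frankenmerging") between two same-width
        # Llama-family checkpoints. Donor names and the slice schedule are
        # placeholders; real tools (e.g. mergekit) stream shards instead of
        # holding both full models in memory like this does.
        import re
        import torch
        from transformers import AutoConfig, AutoModelForCausalLM

        DONOR_A = "NousResearch/Nous-Hermes-Llama2-70b"
        DONOR_B = "Sao10K/Euryale-1.3-L2-70B"

        def slice_layers(sd, start, end, dest):
            """Copy decoder layers [start, end), renumbered to begin at dest."""
            pat = re.compile(r"model\.layers\.(\d+)\.(.+)")
            out = {}
            for key, tensor in sd.items():
                m = pat.match(key)
                if m and start <= int(m.group(1)) < end:
                    out[f"model.layers.{int(m.group(1)) - start + dest}.{m.group(2)}"] = tensor
            return out

        sd_a = AutoModelForCausalLM.from_pretrained(DONOR_A, torch_dtype=torch.float16).state_dict()
        sd_b = AutoModelForCausalLM.from_pretrained(DONOR_B, torch_dtype=torch.float16).state_dict()

        # Embeddings, final norm, and lm_head come from donor A.
        merged = {k: v for k, v in sd_a.items() if not k.startswith("model.layers.")}

        # Overlapping slices: A[0:40] + B[20:60] + A[40:80] = 120 layers.
        dest = 0
        for sd, start, end in [(sd_a, 0, 40), (sd_b, 20, 60), (sd_a, 40, 80)]:
            merged.update(slice_layers(sd, start, end, dest))
            dest += end - start

        config = AutoConfig.from_pretrained(DONOR_A)
        config.num_hidden_layers = dest
        model = AutoModelForCausalLM.from_config(config).to(torch.float16)
        model.load_state_dict(merged)
        model.save_pretrained("frankenmerge-out")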

  • xadiant@alien.top · 1 year ago

    Any tips/attempts on frankensteining two Yi-34B models together to make a ~51B model?

  • Ok_Library5522@alien.top · 1 year ago

    Is this model better at writing stories? I want to compare it with Goliath, which I use on my local machine. Goliath can write stories, but it definitely lacks originality and creativity.

  • trollsalot1234@alien.top · 1 year ago

    I…also L ove oliath! I … i RALLY hope you’re is better. A random hallucination walks up and punches trollsalot right in the face. WHY ARENT WE HAVING SEX YET! she screams

    • nsfw_throwitaway69@alien.top (OP) · 1 year ago

      Try it out and let me know! I included Nous-Hermes in the merge because I’ve found it to be one of the best roleplaying models that doesn’t hallucinate too much. However, in my experience Nous-Hermes also tends to lack a bit in terms of prose, so I was hoping to get something that’s both coherent most of the time and creative.

    • Charuru@alien.top · 1 year ago

      I don’t think so. This is something you do when you’re GPU-poor; closedai would just not undertrain their models in the first place.

  • Distinct-Target7503@alien.top · 1 year ago

    That’s great work!

    Just a question… has anyone tried fine-tuning one of these “Frankenstein” models? Some time ago (when the first “Frankenstein” came out, a ~20B model) I read here on Reddit that lots of users agreed a fine-tune on these merged models would give “better” results, since it would help “smooth” and adapt the merged layers. I probably lack the technical knowledge needed to understand, so I’m asking…
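
    Edit: to make the question concrete, what I have in mind is something like the following: freeze the merged weights and train small LoRA adapters on top, so the interleaved layers can re-adapt to each other. A minimal sketch with peft (the checkpoint path, corpus, and hyperparameters are all hypothetical):

        import torch
        from datasets import load_dataset
        from peft import LoraConfig, get_peft_model
        from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                  DataCollatorForLanguageModeling, Trainer,
                                  TrainingArguments)

        BASE = "frankenmerge-out"  # hypothetical path to a merged checkpoint

        tok = AutoTokenizer.from_pretrained(BASE)
        tok.pad_token = tok.eos_token
        model = AutoModelForCausalLM.from_pretrained(
            BASE, torch_dtype=torch.bfloat16, device_map="auto")

        # Base weights stay frozen; only low-rank adapters on the attention
        # projections are trained.
        model = get_peft_model(model, LoraConfig(
            r=16, lora_alpha=32, lora_dropout=0.05,
            target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

        # Placeholder corpus: one document per line in corpus.txt.
        data = load_dataset("text", data_files="corpus.txt")["train"]
        data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=2048),
                        remove_columns=["text"])

        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir="lora-smooth",
                                   per_device_train_batch_size=1,
                                   gradient_accumulation_steps=16,
                                   num_train_epochs=1, learning_rate=1e-4,
                                   logging_steps=10),
            train_dataset=data,
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
        )
        trainer.train()
        model.save_pretrained("lora-smooth")  # saves only the adapter weights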

  • Aaaaaaaaaeeeee@alien.top · 1 year ago

    possibly going even larger than 120b parameters

    I didn’t know that was possible. Have people made a 1T model yet?

  • noeda@alien.top · 1 year ago

    I will set this to run overnight on Hellaswag 0-shot, like I did with Goliath when it was new: https://old.reddit.com/r/LocalLLaMA/comments/17rsmox/goliath120b_quants_and_future_plans/k8mjanh/

    Thanks for the model! I started investigating some approaches to combining models, to see whether a combination can be better than its individual parts. Just today I finished code that uses a genetic algorithm (sketched below) to pick out parts and frankenstein 7B models together (trying to prove that there is merit to this approach using smaller models… but we’ll see).

    I’ll report back on the Hellaswag results on this model.
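
    The skeleton of the approach looks roughly like this; the fitness function is a stub (real scoring would materialize each candidate merge and benchmark it, e.g. on 0-shot Hellaswag), and all the constants are placeholders:

        import random

        N_LAYERS = 32    # decoder layers in a Llama-style 7B donor
        MAX_SLICES = 6

        def random_slice():
            # A slice is (donor index, start layer, end layer), 4-16 layers long.
            start = random.randrange(N_LAYERS - 4)
            end = random.randint(start + 4, min(start + 16, N_LAYERS))
            return (random.randint(0, 1), start, end)

        def random_candidate():
            return [random_slice() for _ in range(random.randint(2, MAX_SLICES))]

        def fitness(candidate):
            # Stub: pretend we want ~48 total layers. Replace with "build the
            # merge, evaluate it, return the score".
            return -abs(sum(end - start for _, start, end in candidate) - 48)

        def mutate(candidate):
            child = list(candidate)
            child[random.randrange(len(child))] = random_slice()
            return child

        def crossover(a, b):
            cut = random.randint(1, min(len(a), len(b)) - 1)
            return a[:cut] + b[cut:]

        population = [random_candidate() for _ in range(20)]
        for generation in range(10):
            population.sort(key=fitness, reverse=True)
            elite = population[:5]
            population = elite + [
                mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(15)]
        print("best schedule:", max(population, key=fitness))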

  • Saofiqlord@alien.top · 1 year ago

    Huh, interesting weave. It did feel like it made fewer spelling and simple errors than Goliath.

    Once again Euryale’s included. The lack of Xwin makes it better, imo; Xwin may be smart, but it has repetition issues at long context. That’s just my opinion.

    I’d honestly scale it down; there’s really no need to go to 120b. From testing a while back, ~90-100b frankenmerges have the same effect.

    • CardAnarchist@alien.top · 1 year ago

      Goliath makes spelling errors?

      I’ve only used a handful of Mistral 7Bs due to constraints, but I’ve never seen them make any spelling errors.

      Is that a side effect of merging?

      • noeda@alien.top · 1 year ago

        I have noticed too that Goliath makes spelling errors somewhat frequently, more often than other models.

        It doesn’t seem to affect the “smarts” as much, though; it otherwise still produces high-quality text.

    • nsfw_throwitaway69@alien.top (OP) · 1 year ago

      Crap, what’s your setup? I tested it with a single 48GB card, but if you’re using 2x 24GB then it might not work. I’ll have to make a 2.8 bpw quant (or get someone else to do it) so that it works with card splitting.

      • a_beautiful_rhind@alien.top · 1 year ago

        I have 2x 3090s for exl2. I have Tess and Goliath, and both fit with ~3400 context, so somehow your quant is slightly bigger.

        • nsfw_throwitaway69@alien.top (OP) · 1 year ago

          Venus-120b is actually a bit bigger than Goliath-120b. Venus has 140 layers and Goliath has 136 layers, so that would explain it.
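
          For reference, weight count scales roughly linearly with layer count, so the size gap is easy to eyeball:

              # 140 layers vs 136: about 3% more weights at the same bpw.
              print(f"{140 / 136:.3f}x")  # ~1.029x, i.e. roughly 1.3 GB extra on a 45 GB 3.0 bpw quant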

          • a_beautiful_rhind@alien.top · 1 year ago

            Makes sense… it’s doing pretty well; I like the replies. I set the limit to 3400 in tabby; no OOM yet, but it’s using 98%/98%. I assume this means I can bump the other models past 3400 too if I’m using tabby and autosplit.

  • CheatCodesOfLife@alien.top · 1 year ago

    haha damn, I should have taken the NSFW warning seriously before clicking the huggingface link in front of people lol.

    Is this model any good for SFW stuff?

    • nsfw_throwitaway69@alien.top (OP) · 1 year ago

      Yeah, I wanted a picture to go with the model, and that’s what Stable Diffusion spat out :D

      And I haven’t tried it for SFW stuff, but my guess is that it would work fine.

    • uti24@alien.top · 1 year ago

      Is this model any good for SFW stuff?

      Every uncensored LLM I’ve tried worked fine with SFW stuff.

      If you’re talking about storytelling, they might even be better than SFW models. And I’ve never seen NSFW/uncensored models write NSFW stuff unless explicitly asked to.

  • th3st0rmtr00p3r@alien.top · 1 year ago

    I could not get any of the quants to load; it looks like the config is looking for XX of 25 safetensors:

    FileNotFoundError: No such file or directory: "models\Venus-120b-v1.0\model-00001-of-00025.safetensors"
    

    while exl2-3.0bpw has only XX of 06 safetensors.

    • nsfw_throwitaway69@alien.top (OP) · 1 year ago

      🤔 How are you trying to load it? I tested both quants in text-generation-webui and they worked fine for me. I used ExLlamav2_HF to load them.

      • panchovix@alien.top · 1 year ago

        Models in ooba without “exl” in the folder name get routed to the transformers loader by default, so that may be why he got that error.

      • th3st0rmtr00p3r@alien.top · 1 year ago

        It defaulted to transformers; it loaded right away with ExLlamav2_HF. Thank you! I didn’t know what I didn’t know.