If I have multiple 7B models, each trained on one specific topic (e.g. roleplay, math, coding, history, politics…), and an interface that decides, depending on the context, which model to use, could this outperform bigger models while being faster?

  • remghoost7@alien.topB

    I believe this is what GPT-4 actually is.

    I remember reading somewhere that it’s actually a mix of 8 different models, and that it routes your question to one of them depending on its context.

    Would be neat to implement on a local level though. Haven’t seen many people on the local side talk about doing this.

    • feynmanatom@alien.topB

      Lots of rumors, but tbh I think it’s highly unlikely they’re using an MoE. MoE sparsity pays off at batch size = 1 (each token only activates a few experts), but at larger batch sizes the tokens in a batch collectively hit most of the experts, so you still need enough RAM to keep all of them resident and miss out on the point of using an MoE.
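
      For what it’s worth, here’s a rough way to see that effect; the expert count and top-k value below are purely illustrative assumptions, not anything confirmed about GPT-4:

      ```python
      # Count how many distinct experts a batch touches under random top-2 routing.
      # With batch size 1 only 2 of 8 experts are active; a modest batch hits nearly
      # all of them, so every expert still has to be resident in memory when serving
      # real traffic.
      import random

      NUM_EXPERTS = 8   # hypothetical
      TOP_K = 2         # experts activated per token (hypothetical)

      def experts_touched(batch_size: int) -> int:
          """Distinct experts hit by one forward pass of `batch_size` tokens."""
          touched = set()
          for _ in range(batch_size):
              touched.update(random.sample(range(NUM_EXPERTS), TOP_K))
          return len(touched)

      for bs in (1, 4, 16, 64):
          print(f"batch={bs:3d} -> {experts_touched(bs)}/{NUM_EXPERTS} experts active")
      ```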

      • remghoost7@alien.topB

        Lots of rumors…

        Very true.

        We honestly have no clue what’s going on behind ClosedAI’s doors.

        I don’t know enough about MoEs to say one way or the other, so I’ll take your word on it. I’ll have to do more research on them.

    • FullOf_Bad_Ideas@alien.topB

      Jondurbin made something like this with qlora.

      The explanation that GPT-4 is an MoE model doesn’t make sense to me. The GPT-4 API is 30x more expensive than gpt-3.5-turbo. GPT-3.5-turbo is 175B parameters, right? So if GPT-4 were 8 experts of 220B each, it wouldn’t need to cost 30x more; API use would only be maybe 20–50% more expensive. There was also some speculation that 3.5-turbo is 22B. In that case it also doesn’t make sense to me that it would be 30x as expensive.
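
      To make that back-of-envelope math explicit (taking the rumored figures above at face value, and assuming roughly one 220B expert active per token with cost proportional to active parameters):

      ```python
      # Rough cost-per-token comparison by active parameters; not an actual pricing model.
      # All figures are the rumored/assumed numbers from the comment above.
      dense_gpt35_params = 175e9   # assumed size of gpt-3.5-turbo
      expert_params = 220e9        # rumored size of one GPT-4 expert
      active_experts = 1           # assumption: one expert handles each token

      ratio = (active_experts * expert_params) / dense_gpt35_params
      print(f"Compute per token would be ~{ratio:.2f}x gpt-3.5-turbo")  # ~1.26x, i.e. 20-50% more
      # ...which is nowhere near the ~30x price difference on the API.
      ```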

      • Cradawx@alien.topB

        No, several sources, including Microsoft, have said GPT-3.5 Turbo is 20B. GPT-3 was 175B, and GPT-3.5 Turbo was about 10x cheaper on the API than GPT-3 when it came out, so that makes sense.

      • AutomataManifold@alien.topB

        Just to note: don’t read too much into OpenAI’s prices. They’re deliberately losing money as a market-capturing strategy, so it’s not guaranteed that there’s a linear relationship between what they charge for a given service and what their actual costs are.

    • jxjq@alien.topB

      Does this use of mixture-of-experts mean that multiple 70B models would perform better than multiple 7B models?

        • extopico@alien.topB

          Big is an understatement. Please do correct me if I got it wildly wrong, but it appears to be a 3.6 TB colossus.
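
          That figure is roughly what you get from the rumored 8 x 220B configuration at fp16; the numbers below are just that back-of-envelope estimate (ignoring shared non-expert layers), not anything confirmed:

          ```python
          # Back-of-envelope weight size under the rumored 8 x 220B configuration.
          experts = 8
          params_per_expert = 220e9
          bytes_per_param = 2  # fp16
          total_bytes = experts * params_per_expert * bytes_per_param
          print(f"~{total_bytes / 1e12:.1f} TB of weights")  # ~3.5 TB, in the ballpark of 3.6 TB
          ```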

  • yahma@alien.topB

    Yes. This is known as Mixture of Experts (MoE).

    We already have several promising ways of doing this:

    1. QMoE: A Scalable Algorithm for Sub-1-Bit Compression of Trillion-Parameter Mixture-of-Experts Architectures. Paper - GitHub
    2. S-LoRA: Serving thousands of concurrent adapters.
    3. LoRAX: Serve hundreds of concurrent adapters.
    4. LMoE: Simple method of dynamically loading LoRAs (see the sketch after this list).
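
    A minimal sketch of that LMoE-style approach, assuming PEFT-style adapter loading on a single 7B base model; the adapter repo names and topic labels are placeholders, not real repos:

    ```python
    # Keep one base model resident and swap in a topic-specific LoRA adapter per request.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    BASE = "mistralai/Mistral-7B-v0.1"        # any 7B base model
    ADAPTERS = {                              # hypothetical topic -> LoRA adapter mapping
        "math": "your-org/math-lora",
        "coding": "your-org/coding-lora",
        "roleplay": "your-org/roleplay-lora",
    }

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    base = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")

    # Load the first adapter, then register the rest under their own names.
    names = list(ADAPTERS)
    model = PeftModel.from_pretrained(base, ADAPTERS[names[0]], adapter_name=names[0])
    for name in names[1:]:
        model.load_adapter(ADAPTERS[name], adapter_name=name)

    def generate(prompt: str, topic: str) -> str:
        model.set_adapter(topic)              # activate the "expert" for this request
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=256)
        return tokenizer.decode(out[0], skip_special_tokens=True)
    ```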
    • sampdoria_supporter@alien.topB

      I can’t believe I hadn’t run into this. Would you indulge me on the implications for agentic systems like Autogen? I’ve been working on having experts cooperate that way rather than being combined into a single model.

  • feynmanatom@alien.topB

    This might be pedantic, but this is a field with so much random vocabulary and it’s better for folks to not be confused.

    MoE is slightly different. An MoE is a single LLM with gated layers that “select” which experts to route embeddings/tokens to. It’s pretty difficult to scale and serve in practice.

    I think what you’re referring to is more like a model router. You can use a general LLM to “classify” a prompt and then route the entire prompt to a downstream LLM. It’s unclear if this would be faster than a 70B LLM since you would repeat the encoding phase and have some generation, but it could certainly be better.
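
    A minimal sketch of such a router, assuming a small general model and a few topic-specialised 7B models behind OpenAI-compatible endpoints (llama.cpp server, vLLM, etc.); the URLs and topic labels are placeholders:

    ```python
    # Classify the prompt with a small "router" model, then forward the whole prompt
    # to the matching specialised model. Note the prompt gets encoded twice.
    import requests

    ROUTER_URL = "http://localhost:8000/v1/chat/completions"   # small general router model
    EXPERTS = {                                                 # hypothetical specialised 7B servers
        "math": "http://localhost:8001/v1/chat/completions",
        "coding": "http://localhost:8002/v1/chat/completions",
        "roleplay": "http://localhost:8003/v1/chat/completions",
    }

    def chat(url: str, system: str, user: str) -> str:
        resp = requests.post(url, json={
            "model": "local",
            "messages": [{"role": "system", "content": system},
                         {"role": "user", "content": user}],
            "temperature": 0,
        })
        return resp.json()["choices"][0]["message"]["content"]

    def route(prompt: str) -> str:
        labels = ", ".join(EXPERTS)
        topic = chat(ROUTER_URL,
                     f"Classify the user's request as exactly one of: {labels}. Reply with the label only.",
                     prompt).strip().lower()
        expert_url = EXPERTS.get(topic, next(iter(EXPERTS.values())))  # fall back to the first expert
        return chat(expert_url, "You are a helpful assistant.", prompt)
    ```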

    • wishtrepreneur@alien.topB

      You can use a general LLM to “classify” a prompt and then route the entire prompt to a downstream LLM.

      Why can’t you just train the “router” LLM to decide which downstream LLM to use and pass its activations along to that model? Couldn’t you have “headless” (no encoding layers) downstream LLMs? Then inference could use a (6.5B + 6.5B)-parameter model with the generalizability of a 70B model.

      • feynmanatom@alien.topB

        Hmm, not sure I track what an “encoding layer” is. The encoding (prefill) phase involves filling the KV cache across the full depth of the model, so I don’t think there’s an activation you could just pass across without model surgery plus additional fine-tuning.

  • DanIngenius@alien.topB

    I really like the idea; I think multiple 13B models would be awesome! Having them managed by a carefully configured routing model that is completely uncensored is something I want to do. I want to crowdfund a host for this, so DM me if you are interested!