Why is there no analog to Napster/BitTorrent/Bitcoin for LLMs?

Is there a technical reason why there is no open-source LLM that we can all install on our local machines, which contributes computing power to answering prompts and rewards those who contribute by letting them submit more prompts of their own?

Obviously, there must be a technical reason that prevents distributed LLMs, or else one would already have been created by now.
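The incentive scheme the post describes (serve other people's prompts to earn the right to submit your own) can be sketched as a simple credit ledger. This is a toy illustration with hypothetical names; a real network would also have to verify that the contributed work was actually done correctly, which is the genuinely hard part this sketch ignores:

```python
class CreditLedger:
    """Toy accounting for a compute-sharing network: serving a prompt
    for someone else earns credits; submitting your own prompt spends
    them (compare BitTorrent's tit-for-tat). Verifying the work is
    deliberately out of scope here."""

    def __init__(self):
        self.balances = {}

    def record_served(self, node_id, tokens_generated):
        # one credit per token served is an arbitrary exchange rate
        self.balances[node_id] = self.balances.get(node_id, 0) + tokens_generated

    def try_submit(self, node_id, tokens_requested):
        # a prompt is accepted only if the node has earned enough credit
        if self.balances.get(node_id, 0) < tokens_requested:
            return False
        self.balances[node_id] -= tokens_requested
        return True
```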

  • deviantkindle@alien.topB · 1 year ago

    I was thinking of distributed MoEs as well.

    The question I have is: how do you route queries? I don't know how to do that when all the experts are in the same cluster, let alone distributed.
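For what it's worth, the standard MoE answer is a learned router: a small linear layer scores every expert for each token, and only the top-k experts are invoked. A minimal sketch of top-k gating in plain Python (the weight values are made up; in a distributed setting each expert index would map to a remote node rather than a local module):

```python
import math

def softmax(scores):
    """Numerically stable softmax over raw router scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(hidden, expert_weights, top_k=2):
    """Score each expert with a linear router, keep the top_k,
    and renormalize their weights so they sum to 1.

    hidden: the token's hidden-state vector.
    expert_weights: one router weight vector per expert
    (hypothetical learned parameters for illustration).
    Returns a list of (expert_index, weight) pairs.
    """
    scores = [sum(h * w for h, w in zip(hidden, wv)) for wv in expert_weights]
    probs = softmax(scores)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:top_k]
    norm = sum(probs[i] for i in chosen)
    return [(i, probs[i] / norm) for i in chosen]
```

Distribution then becomes a dispatch problem: the node holding the router sends the token's hidden state to the chosen experts' nodes and mixes the returned outputs by these weights, which is exactly where network latency starts to hurt.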

    • dobkeratops@alien.topB · 1 year ago

      > I was thinking of distributed MoEs as well. Question I have is how do you route queries? I don't know how to do that if all the experts are in the same cluster, let alone distributed.

      Yeah, it's a work in progress. It's not trivial to set up. It's easy to imagine a way it could be done, but it all has to be built, tested, and refined.

      llama.cpp is out there. I am a C++ person but I don't have deep experience with LLMs generally (how to fine-tune, etc.) and have other projects in progress. But if you look around in the usual places with some search terms, you'll find the attempts in progress, and they could probably use volunteers.

      My aspirations are more toward the vision side; I'm a graphics person and need to get on with producing synthetic data or something.

    • madmax_br5@alien.topB · 1 year ago

      I don't know if there's much value there when LoRAs are easily portable: you can just select the right LoRA as needed. One base model instance on one machine, many potential experts. This has been demonstrated.
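The "one base model, many LoRA experts" idea rests on LoRA's low-rank update: the effective weight is W + BA, so you keep one copy of W and swap in tiny (A, B) pairs per task. A minimal sketch in plain Python, with a hypothetical per-task adapter registry (all matrices here are made-up rank-1 examples, not real trained adapters):

```python
def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(r * x for r, x in zip(row, v)) for row in m]

def apply_with_lora(x, base_w, lora_a=None, lora_b=None, scale=1.0):
    """y = W x + scale * B (A x). A and B are the low-rank adapter
    factors; passing None runs the unmodified base layer."""
    y = matvec(base_w, x)
    if lora_a is not None and lora_b is not None:
        delta = matvec(lora_b, matvec(lora_a, x))
        y = [a + scale * d for a, d in zip(y, delta)]
    return y

# Hypothetical registry: pick the right adapter per query.
# Each entry is (A, B) with A of shape (rank x d_in), B of (d_out x rank).
adapters = {
    "code": ([[1.0, 0.0]], [[0.5], [0.0]]),
    "chat": ([[0.0, 1.0]], [[0.0], [0.5]]),
}

def answer(task, x, base_w):
    """Route a query to the base layer plus its task's LoRA, if any."""
    a, b = adapters.get(task, (None, None))
    return apply_with_lora(x, base_w, a, b)
```

Because A and B are a few megabytes rather than gigabytes, swapping "experts" this way is mostly a memory-copy, which is why it undercuts the case for spreading experts across machines.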