Not super knowledgeable about the specs of the different Orange Pi and Raspberry Pi models. I’m looking for something relatively cheap that can connect to WiFi and USB, and I want to be able to run at least 13B models at a decent tok/s.

Also open to other solutions. I have an M1 Mac (8 GB RAM), and upgrading the computer itself would be cost prohibitive for me.

  • ThinkExtension2328@alien.topB · 1 year ago

    Honestly the M1 is probably the cheapest solution you have. Get yourself LM Studio and try out a 7B K_M model; you’re going to struggle with anything larger than that. But that will let you experience what we are all playing with.
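
    LM Studio can also serve the loaded model over a local OpenAI-compatible endpoint, so you can script against it instead of clicking through the GUI. A minimal sketch, assuming the local server is running on its default port (1234) with a model already loaded, and that the `openai` Python package is installed:

    ```python
    # Minimal sketch: query LM Studio's local OpenAI-compatible server.
    # Assumes the server is at the default http://localhost:1234/v1 with a
    # model already loaded; the api_key is a dummy (ignored locally).
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    resp = client.chat.completions.create(
        model="local-model",  # LM Studio routes this to whatever model is loaded
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    print(resp.choices[0].message.content)
    ```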

    • ClassroomGold6910@alien.topOPB · 1 year ago

      3B’s work amazingly and run super smoothly, but 7B models, while running at a fair 15 tokens per second, prevent me from using any other application at the same time and occasionally freeze my mouse and screen until the response is finished.

    • ClassroomGold6910@alien.topOPB · 1 year ago

      What’s the difference with `K_M` models? Also, why is `Q4_0` legacy but not `Q4_1`? It would be great if someone could explain that lol

      • Sea_Particular_4014@alien.topB · 1 year ago

        Q4_0 and Q4_1 would both be legacy.

        K_M is the new “k-quant” (I guess it’s not that new anymore; it’s been around for months now).

        The idea is that the more important layers are done at a higher precision, while the less important layers are done at a lower precision.

        It seems to work well, thus why it has become the new standard for the most part.

        Q4_K_M does the most important layers at 5-bit and the less important ones at 4-bit.

        It is closer in quality/perplexity to Q5_0, while being closer in size to Q4_0.
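
        For concreteness, here’s roughly what using a Q4_K_M file looks like. A minimal sketch with `llama-cpp-python` (one of several llama.cpp bindings); the model path is a placeholder:

        ```python
        # Minimal sketch, assuming `pip install llama-cpp-python` and a local
        # Q4_K_M GGUF file; the path below is a placeholder, not a real download.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./model-7b.Q4_K_M.gguf",  # placeholder: any Q4_K_M GGUF
            n_ctx=2048,    # context window
            n_threads=4,   # tune to your CPU core count
        )

        out = llm("Q: What is a k-quant? A:", max_tokens=64)
        print(out["choices"][0]["text"])
        ```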

  • gpt872323@alien.topB · 1 year ago

    I have this same question. I am thinking of a mini PC, more powerful than both and relatively OK on price; not a NUC, but rather one with an AMD or Intel mobile-series chip.

  • knownboyofno@alien.topB · 1 year ago

    What do you define as “decent” tokens per second? Do you have a budget yet? Do you want to run the 13B at full precision or a quantized precision?
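
    For context on why the precision question matters: a rough back-of-envelope for the weight memory of a 13B model, with the bytes-per-parameter figures as approximations and KV cache / runtime overhead ignored:

    ```python
    # Back-of-envelope weight memory for a 13B model at different precisions.
    # Bytes-per-parameter values are approximate; KV cache and runtime
    # overhead add more on top.
    params = 13e9
    for name, bytes_per_param in [("FP16", 2.0), ("Q8_0", 1.06), ("Q4_K_M", 0.6)]:
        gib = params * bytes_per_param / 2**30
        print(f"{name}: ~{gib:.1f} GiB")
    # FP16: ~24.2 GiB, Q8_0: ~12.8 GiB, Q4_K_M: ~7.3 GiB
    ```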

  • ButlerFish@alien.topB · 1 year ago

    If you want to run the models posted here, and don’t care so much about physical control of the hardware they are running on, then you can use various ‘cloud’ options: RunPod and Vast are straightforward and cost about 50 cents an hour for a decent system.

      • ButlerFish@alien.topB · 1 year ago

        What I do is sign up to RunPod and buy $10 of credit, then go to the “templates” section and use it to make a cloud VM pre-loaded with the software to run LLMs. One of their ‘official’ templates called “RunPod TheBloke LLMs” should be good. I usually use the A100 pod type, but you can get bigger or smaller / faster or cheaper.

        Depending on the README for the template, you can click “Connect to Jupyter” and run the notebook that came with the template to start services, download your model from Hugging Face, and so on. This is fine for experimenting with LLMs.
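
        The download step can also be scripted from the pod’s Jupyter terminal. A sketch using the `huggingface_hub` library; the repo and filename are examples (check what’s current), and `/workspace` is RunPod’s usual volume mount:

        ```python
        # Sketch: pull a single GGUF file from Hugging Face inside the pod.
        # Assumes `pip install huggingface_hub`; repo_id and filename are examples.
        from huggingface_hub import hf_hub_download

        path = hf_hub_download(
            repo_id="TheBloke/Llama-2-13B-chat-GGUF",  # example repo
            filename="llama-2-13b-chat.Q4_K_M.gguf",   # example file
            local_dir="/workspace/models",             # RunPod's usual volume mount
        )
        print("Downloaded to", path)
        ```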

        If what you had planned was some kind of home project, like building your own home assistant, then you have a bunch of other problems to solve: how to do it cheaply, trigger words, and TTS/STT. You might use the serverless or spot-instance functionality RunPod has and figure out the smallest pod / LLM that works for your use. You’d probably do the microphone and trigger-word stuff on your Pi and have it connect to the RunPod server to run the TTS/STT and LLM bits, as in the sketch below.
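
        The Pi side of that can be as small as a single HTTP call. A sketch against RunPod’s serverless endpoint API; the endpoint ID, API key, and the `input` schema are placeholders, since the schema depends entirely on the handler you deploy:

        ```python
        # Sketch of the Pi-side call to a RunPod serverless endpoint.
        # ENDPOINT_ID and API_KEY are placeholders, and the "input" payload
        # depends on your handler. Assumes `pip install requests`.
        import requests

        ENDPOINT_ID = "your-endpoint-id"   # placeholder
        API_KEY = "your-runpod-api-key"    # placeholder

        resp = requests.post(
            f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"input": {"prompt": "Turn on the living room lights."}},
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json())
        ```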

        • herozorro@alien.topB · 1 year ago

          Remember, when you finish for the day, that if you don’t delete the pod (and any storage you created), your credit balance will keep draining while you sleep. But at least it can’t go negative and send you a big bill like evil AWS.

          Do they charge per hour like a parking meter, or only when the pod is used?

          • ButlerFish@alien.topB · 1 year ago

            You get charged while the pod is running, and the pod is running until you turn it off in the RunPod control panel, even if you aren’t actually doing anything on there right now.

            If you added a volume (cloud hard drive) when you created it, then even when the pod is turned off you are paying 10 cents / gigabyte / month to rent that hard drive, so your data is still there when you turn it on again.

            For niche use cases where it needs to be available but isn’t running stuff most of the time, like that home assistant I mentioned, look at RunPod serverless, which is much more fiddly and harder to use but will let you pay essentially per prompt… For playing with LLMs interactively, it’s much better to just rent a server and turn it off when you are done.
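
            A quick sanity check on those rates (as quoted in this thread; real pricing varies by GPU type and region):

            ```python
            # Cost check using the rates mentioned above (~$0.50/hr for a pod,
            # $0.10/GB/month for a volume); actual RunPod pricing varies.
            hours_of_tinkering = 10
            volume_gb = 50

            pod_cost = 0.50 * hours_of_tinkering   # only while the pod is running
            storage_cost = 0.10 * volume_gb        # per month, even when stopped
            print(f"pods: ${pod_cost:.2f}, storage: ${storage_cost:.2f}/month")
            # -> pods: $5.00, storage: $5.00/month
            ```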