Currently running them on-CPU:

  • Ryzen 9 3950X

  • 64 GB DDR4-3200

  • RX 6700 XT 12 GB (does not fit much more than 13B models, so not relevant here)

While running on-CPU with GPT4All, I’m getting 1.5-2 tokens/sec. It finishes, but man is there a lot of waiting.
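For reference, CPU inference on models this size is usually memory-bandwidth bound, since every generated token streams the full set of weights from RAM. A rough back-of-envelope sketch for this setup (the bandwidth, quantization, and efficiency numbers are assumptions, not measurements) lands right around the observed speed:

```python
# Back-of-envelope estimate: token generation rate is roughly
# (memory bandwidth) / (model size), scaled by a real-world efficiency factor.
# All figures below are assumptions for illustration, not measurements.

def peak_bandwidth_gbs(channels=2, bus_bytes=8, mt_per_s=3200):
    # DDR4-3200 in dual channel: 2 * 8 bytes * 3200e6 transfers/s = 51.2 GB/s peak
    return channels * bus_bytes * mt_per_s * 1e6 / 1e9

def model_size_gb(params_billion=30, bits_per_weight=4.5):
    # ~4.5 bits/weight is typical for a q4_0-style 4-bit quantization
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def est_tokens_per_s(bw_gbs, size_gb, efficiency=0.5):
    # Sustained bandwidth is well below theoretical peak; 0.5 is a guess
    return bw_gbs * efficiency / size_gb

bw = peak_bandwidth_gbs()   # 51.2 GB/s theoretical peak
size = model_size_gb()      # ~16.9 GB for a 30B model at 4-bit
print(f"~{est_tokens_per_s(bw, size):.1f} tok/s")  # ~1.5 tok/s
```

Under these assumptions the ceiling works out to about 1.5 tok/s, which matches what's being observed, and it suggests the cheapest wins come from more memory bandwidth (or offloading layers to a GPU with enough VRAM) rather than more CPU cores.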

What’s the most affordable way to get a faster experience? The two models I play with the most are Wizard-Vicuna 30B, plus WizardCoder and CodeLlama 34B.

  • fediverser@alien.top
    1 year ago

    This post is an automated archive from a submission made on /r/LocalLLaMA, powered by Fediverser software running on alien.top. Responses to this submission will not be seen by the original author until they claim ownership of their alien.top account. Please consider reaching out to them to let them know about this post and help them migrate to Lemmy.

    Lemmy users: you are still very much encouraged to participate in the discussion. There are still many other subscribers on !localllama@poweruser.forum that can benefit from your contribution and join in the conversation.

    Reddit users: you can also join the fediverse right away by visiting https://portal.alien.top. If you are looking for a Reddit alternative made for and by an independent community, check out Fediverser.