I’m fascinated by the whole ecosystem popping up around LLaMA and local LLMs. I’m also curious what everyone here is up to with the models they are running.

Why are you interested in running local models? What are you doing with them?

Secondly, how are you running your models? Are you truly running them on local hardware, or on a cloud service?

  • IONaut@alien.top · 1 year ago

    Part of it is trying to find models that will work in my own projects so I don’t have to rely on OpenAI or some other API. I’m also interested in the lightweight end of things: models that may run well on devices without a connection, like robots or wearable assistants in isolated places.

    If I’m developing something with AutoGen or MemGPT that uses a lot of API calls, testing using a local LLM is free.
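The free-local-testing idea above can be sketched as a small config switch. The endpoint URL, port, and model name below are assumptions, not anything from the post; llama.cpp’s server and Ollama both expose an OpenAI-compatible API on a local port, so AutoGen-style tools can be pointed at either backend with the same config shape.

```python
def make_llm_config(local: bool) -> dict:
    """Build one OpenAI-style config entry, targeting either a local
    server (free to hammer during development) or the hosted API.
    All local values here are illustrative assumptions."""
    if local:
        return {
            "model": "local-model",                   # whatever the server loaded
            "base_url": "http://localhost:8000/v1",   # assumed local endpoint
            "api_key": "not-needed",                  # local servers ignore keys
        }
    return {
        "model": "gpt-4",
        "api_key": "sk-...",  # real key required for the hosted API
    }

# During development, flip to the local backend so heavy API usage costs nothing.
config = make_llm_config(local=True)
```

Switching back to the hosted API for production is then a one-argument change rather than a rewrite of the agent code.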

    Not to mention the company I work for is interested in using AI but would rather not send customer data to any of these big companies.