I’m fascinated by the whole ecosystem popping up around Llama and local LLMs. I’m also curious what everyone here is up to with the models they’re running.

Why are you interested in running local models? What are you doing with them?

Secondly, how are you running your models? Are you actually running them on local hardware, or on a cloud service?

  • softwareweaver@alien.top · 1 year ago

    We are using local models to power Fusion Quill’s AI word processor. Fusion Quill is a Windows app on the Microsoft Store.

    We are currently using the Mistral 7B model for various tasks like summarization, content expansion, etc. See our YouTube video:
    https://youtu.be/883IoDlRzpM

    We expect local AI models to keep evolving, but we also support OpenAI’s ChatGPT API and the vLLM API.

    Regards,
    Ash
    FusionQuill.AI