Today I tried a number of private (local) open-source #GenAI #LLM servers in Docker. I only run LLM servers in Docker - without it, I'm pretty sure my desktop would become an angry bag of snakes in no time (snakes, pythons, geddit? 🐍 😁 ). For context, I'm evaluating these LLM components to figure out what part they might play in my Backchat plugin project for Backstage from Spotify (https://via.vmw.com/backchat)
Here’s what I discovered:
* PrivateGPT has promise. It offers an OpenAI-compatible API server, but it's much too hard to configure and run in Docker containers at the moment, and you must build those containers yourself. If it did run, it could be awesome, since it offers a Retrieval Augmented Generation ("ingest my docs") pipeline. The project's docs are messy when it comes to Docker. (https://github.com/imartinez/privateGPT)
* OpenVINO Model Server. Offers a pre-built Docker container, but seems more suited to ML than to LLM/chat use cases. Also, it doesn't offer an OpenAI API. Pretty much a non-starter for my use case, but an impressive project. (https://docs.openvino.ai/2023.1/ovms_what_is_openvino_model_server.html)
* Ollama Web UI & Ollama. This server and client combination was super easy to get going under Docker. Images are provided, and with a little digging I soon found a `compose` stanza. The chat GUI is really easy to use and has probably the best model download feature I've ever seen. Just one problem: it doesn't seem to offer OpenAI API compatibility, which limits its effectiveness for my use case. (https://github.com/ollama-webui/ollama-webui)
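For anyone who wants to skip the digging, here's a minimal sketch of the kind of `compose` stanza I mean. The image tags, port mappings, and the `OLLAMA_API_BASE_URL` variable are assumptions based on the project at the time of writing - check the ollama-webui README for current values before using this:

```yaml
# Hedged sketch - image names, ports, and env vars are assumptions,
# not copied from the official docs. Verify against the project README.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama   # persist downloaded models across restarts
    ports:
      - "11434:11434"          # Ollama's default API port

  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    depends_on:
      - ollama
    ports:
      - "3000:8080"            # browse to http://localhost:3000
    environment:
      # point the UI at the ollama service on the compose network
      - OLLAMA_API_BASE_URL=http://ollama:11434/api

volumes:
  ollama: {}
```

`docker compose up -d` and then pulling a model from the GUI's download feature was all it took for me.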
In the end I liked Ollama/Ollama Web UI a lot. If OpenAI API compatibility gets added, it could become my go-to, all-round LLM project - but not yet.
You can use Ollama with LiteLLM to get an OpenAI-compatible API.
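To sketch what that suggestion looks like in practice: the LiteLLM proxy can front an Ollama server and expose it under an OpenAI-style endpoint. The model names, port, and `api_base` below are illustrative assumptions - adapt them to whatever model you've pulled and wherever Ollama is listening:

```yaml
# Hedged sketch of a LiteLLM proxy config (e.g. config.yaml),
# started with: litellm --config config.yaml
# Model names and api_base are assumptions for illustration.
model_list:
  - model_name: gpt-3.5-turbo        # alias your OpenAI clients will request
    litellm_params:
      model: ollama/llama2           # the actual Ollama-served model
      api_base: http://localhost:11434
```

Existing OpenAI client code can then be pointed at the proxy's base URL instead of api.openai.com, with no other changes.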
Curious to hear what other UIs people use, for what purpose, and what they like about each (like oobabooga or Kobold).
SillyTavern for text chat. A true power-user LLM frontend, so I always use the same interface no matter which backend I need (e.g. koboldcpp, oobabooga's text-generation-webui, or even ChatGPT/GPT-4).
Going beyond text, I recently started using Voxta together with VaM/Virt-A-Mate. That brings my AI’s avatar into the real world, thanks to the Quest 3’s augmented/mixed reality features. Here’s an example by one of Voxta’s devs that showcases what that looks like (without mixed reality, though). Sure, it’s just for fun right now, but I see the potential for it to become more than an entertaining novelty.
I recently started using Voxta together with VaM/Virt-A-Mate
oh my god…how are you liking it so far? I might disappear from society for a few months based on your answer…
It's like VR itself - amazing, mind-blowing technology, but it needs engagement and motivation to be really useful. Text chat is easier, and with my limited time I'm not using this as much as I'd like to. Still, there's a lot of active development and untapped potential, so I'm looking forward to seeing how this evolves.
At our lab, we're using the latest version of the ollama-webui and it seems to have OpenAI API support already, among many other new features (and an updated UI, which imo is a lot better). You might want to update to the latest version!
I have Ollama on my Mac (not Docker) and installed the Ollama web UI. It works fine, but their instructions for running Ollama on a LAN don't work for me. The flags they mention adding to the CLI command throw an error (especially the `*` part).

Which Dockerfile did you build to get PrivateGPT to work? There are no docs, there are multiple Dockerfiles, and just building them doesn't seem to work.