Hi community,
I am writing my own GUI in which I want to run an LLM completely locally. The problem is I don't know how to get started with the LLM.
Can someone explain the first steps for integrating/working with the LLM, or does someone know some good tutorials?
The LLM is downloaded locally. Do I now need to integrate a library or something? Sorry, I could not find much useful/direct information on the topic.
Thank you very much in advance!
Thanks for the comments so far!
My intention was to learn about LLMs and put that into practice. I am aware of existing projects and already have something working; it just feels a bit overloaded or impractical.
I wanted to learn about writing an application using LLMs. My goal was to write something into which I could load a set of PDFs and ask questions about them, all on my PC.
I used PySide6 (Qt for Python) for the GUI since I was pretty familiar with Qt and C++, and I'm not a fan of running apps in a browser.
I used a combination of HuggingFace APIs and Langchain to write the application itself, and ended up with a somewhat generalized application into which I could load different models.
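The core of that kind of "ask questions about your PDFs" app is retrieval: split the documents into chunks, find the chunks most relevant to the question, and feed them to the model. Here's a minimal sketch of that idea in plain Python — the keyword-overlap scoring is a crude stand-in for the embeddings Langchain would normally use, and the library calls in the commented section (pypdf, a transformers text-generation pipeline) are assumptions about one possible stack, not the poster's exact code:

```python
def chunk_text(text, size=500, overlap=100):
    """Split text into overlapping character chunks for retrieval."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

def score(chunk, question):
    """Crude keyword-overlap relevance score (a stand-in for embeddings)."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words)

def top_chunks(chunks, question, k=3):
    """Return the k chunks most relevant to the question."""
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

if __name__ == "__main__":
    # The real app would extract text and answer with a local model, e.g.:
    #   from pypdf import PdfReader
    #   text = "".join(p.extract_text() for p in PdfReader("doc.pdf").pages)
    #   from transformers import pipeline
    #   llm = pipeline("text-generation", model="path/to/local-model")
    #   context = "\n".join(top_chunks(chunk_text(text), question))
    #   answer = llm(f"{context}\n\nQuestion: {question}\nAnswer:")
    text = "cats are mammals. " * 100
    chunks = chunk_text(text)
    print(top_chunks(chunks, "are cats mammals?")[0][:30])
```

Langchain packages exactly this pattern (loaders, splitters, vector stores, retrieval chains), which is convenient but also where the "overloaded" feeling tends to come from — the underlying mechanics are this simple.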
It works, maybe not perfectly, but I did accomplish what I wanted, to learn about implementing an application.
I did the same thing with Stable Diffusion models, but with just HuggingFace APIs, no Langchain.
If you want to learn how to run an LLM in inference mode locally, you should code that at the command line rather than spend time building a GUI app first. Build a GUI app if you want to learn how to build GUI apps, or once you've already proven out the engine at the command line and now need a pretty GUI around it.
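A command-line prototype of that engine can be very small. This sketch uses the Hugging Face transformers `pipeline` API as one possible way to load a local model — the model path is a placeholder for whatever you downloaded, and the prompt format is an assumption (each model family expects its own chat template):

```python
def build_prompt(history, user_msg):
    """Format a simple chat-style prompt from (user, assistant) turn pairs.
    Real models usually want their own chat template instead of this."""
    lines = [f"User: {u}\nAssistant: {a}" for u, a in history]
    lines.append(f"User: {user_msg}\nAssistant:")
    return "\n".join(lines)

if __name__ == "__main__":
    # pip install transformers torch; point `model` at your local download.
    from transformers import pipeline
    generate = pipeline("text-generation", model="path/to/local-model")
    history = []
    while True:
        msg = input("> ")
        prompt = build_prompt(history, msg)
        out = generate(prompt, max_new_tokens=200)[0]["generated_text"]
        reply = out[len(prompt):].strip()  # pipeline echoes the prompt back
        print(reply)
        history.append((msg, reply))
```

Once a loop like this works, wrapping it in PySide6 is mostly a matter of moving the `generate` call onto a worker thread so the GUI stays responsive.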