I’m fascinated by the whole ecosystem popping up around llama and local LLMs. I’m also curious what everyone here is up to with the models they are running.
Why are you interested in running local models? What are you doing with them?
Secondarily, how are you running your models? Are you actually running them on local hardware, or on a cloud service?
Quite a lot of it is stuff that commercial/corporate models won't let me do, and that I wouldn't do on them even if they did. Private stuff. Yes, NSFW can of course be part of it.
Beyond that, things where I think the commercial/corporate models are too expensive (no, I have not checked my power bill yet…).