I’m fascinated by the whole ecosystem popping up around Llama and local LLMs. I’m also curious what everyone here is up to with the models they are running.
Why are you interested in running local models? What are you doing with them?
Secondly, how are you running your models? Are you actually running them on local hardware, or on a cloud service?
1 - Horny stuff
2 - Waiting for a model smart enough to be fed a JSON object at a fixed interval, giving it information about its environment, like a simplified version of our brain receiving a constant stream of input. That object would sometimes contain user input (the regular questions we ask these models) and sometimes not. Some models can keep up with this for a while but eventually lose track. If anyone has done anything like this or has any tips / suggestions, I’ll happily accept them. A rough sketch of the loop I have in mind is below.
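For the curious, here's a minimal sketch of that loop in Python. It assumes a local Ollama instance on its default port and uses its `/api/chat` endpoint; `read_environment()` and the model name are placeholders, and the sliding window over `history` is one blunt way to delay the "loses track" problem:

```python
import json
import time

import requests  # pip install requests; assumes Ollama at the default port

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's chat endpoint
MODEL = "llama3"  # placeholder; substitute whatever model you run

def read_environment():
    """Placeholder: collect whatever sensor/state data you care about."""
    return {"time": time.strftime("%H:%M:%S"), "temperature_c": 21.5}

def main(interval_s: float = 10.0):
    history = []  # rolling chat history; trimmed below to fight context loss
    while True:
        # Build the periodic JSON snapshot of the environment.
        snapshot = {"environment": read_environment()}

        # Sometimes there's user input, sometimes not; wire this to
        # stdin or a queue in a real setup.
        user_input = None
        if user_input:
            snapshot["user_input"] = user_input

        history.append({"role": "user", "content": json.dumps(snapshot)})
        history = history[-20:]  # crude sliding window over the context

        resp = requests.post(
            OLLAMA_URL,
            json={"model": MODEL, "messages": history, "stream": False},
            timeout=120,
        )
        reply = resp.json()["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        print(reply)

        time.sleep(interval_s)

if __name__ == "__main__":
    main()
```

The fixed-size window is obviously lossy; summarizing older turns into a compact state message before evicting them would probably hold up longer.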