so I got this shiny new GPU and I want to push it to the limit. What’s the most powerful, smartest model out there? Ideally something with as much long-term memory as possible. I’m coming off of ChatGPT 4 and want something local and uncensored
A 70b model quantized to around 4 bits per weight, with a 16k context, fits in exllamav2 with room to spare. If you can add a 3090 or 4090 as well, you can step up to a 6-bit 70b with a 32k context. That’s my standard inference setup and it covers a lot of ground.
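To sanity-check whether a quantized 70b plus its context cache fits your VRAM, a rough estimate is weights (params × bits-per-weight) plus the FP16 KV cache. The architecture numbers below are Llama-2-70B’s published config (80 layers, 8 KV heads via grouped-query attention, head dim 128) and 4.65 bpw is just an example quant level; treat both as assumptions for other models:

```python
# Back-of-envelope VRAM estimate: quantized weights + FP16 KV cache.
# Layer/head counts are Llama-2-70B's config; other 70b models may differ.

def model_weights_gb(n_params: float, bits_per_weight: float) -> float:
    """Quantized weight footprint in GB (ignores small per-tensor overheads)."""
    return n_params * bits_per_weight / 8 / 1e9

def kv_cache_gb(seq_len: int, n_layers: int = 80, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """FP16 K+V cache for a single sequence of seq_len tokens."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

weights = model_weights_gb(70e9, 4.65)   # ~40.7 GB at 4.65 bpw
cache = kv_cache_gb(16_384)              # ~5.4 GB for a 16k context
print(f"weights ~{weights:.1f} GB, kv cache ~{cache:.1f} GB, "
      f"total ~{weights + cache:.1f} GB")
```

The total (roughly 46 GB here) is what has to fit across your cards; exllamav2 can split the layers over multiple GPUs, which is why adding a 3090 or 4090 buys you headroom for higher bit rates and longer contexts.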
I also have a 3080ti! I didn’t even know it was possible to combine them. Where can I go to learn how?
Hugging Face will have models you can download, and GitHub has llama.cpp and other repositories that will get you going. I just started playing around with local LLMs and it’s been an interesting journey, mainly because I’m on a MacBook Pro - just the M2 Pro with 16 GB of shared memory. So far I’ve been running 13b models with decent results.

You may want to check out LocalAIVoiceChat on GitHub for something that lets you talk with your voice and get realtime voice playback of the generated AI response. There are many others there, like oobabooga’s text-generation-webui and SillyTavern. Start looking into those for now and it will open that can of worms for you. LM Studio is also something to check out.

Good luck. PC-based AI tools are much easier and more compatible than on a Mac, but so far I’m doing OK - you’ll have it even better. Hope that helps a bit.