Hi team
I’m new to this and installed LM Studio (I’m on an M1 Pro with 16GB RAM). I’m looking for a model and I get a lot of options - which one should I go for, and why? (per the screenshot below)
Also, can you help me understand the capabilities of my machine, and some of the models you’d recommend for your own use cases / for fun?
Thank you!!!
The options you are seeing are different quants of the same model. For 7Bs, you generally want to stick to Q4_K_M and up. Generally, the bigger the file size, the closer its quality is to the original unquantized model.
For 7B models, your 16GB unified memory should be able to run the Q6_K variant with 8192 context size no problem. The model you’re looking at is good but it’s slightly dated at this point. Hard to recommend models without knowing your specific use case for it, but here goes nothing:
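To see why a Q6_K 7B at 8192 context fits comfortably in 16GB, here’s a rough back-of-the-envelope sketch. The numbers are assumptions on my part (Llama-style 7B: 32 layers, 4096 hidden dim, no GQA, fp16 KV cache, Q6_K at roughly 6.56 bits per weight), so treat it as an estimate, not an exact figure:

```python
# Rough memory estimate for a quantized 7B model.
# Assumed architecture: Llama-style 7B (32 layers, 4096 hidden dim),
# fp16 KV cache, Q6_K at ~6.56 bits/weight. These are ballpark values.

def model_file_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate quantized model file size in GiB."""
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

def kv_cache_gb(n_layers: int, ctx: int, hidden: int, bytes_per_elem: int = 2) -> float:
    """KV cache: 2 tensors (K and V) per layer, each ctx x hidden."""
    return 2 * n_layers * ctx * hidden * bytes_per_elem / 2**30

weights = model_file_gb(7.0, 6.56)   # Q6_K weights
kv = kv_cache_gb(32, 8192, 4096)     # 8192-token context
print(f"weights ~{weights:.1f} GiB, KV cache ~{kv:.1f} GiB, total ~{weights + kv:.1f} GiB")
# -> weights ~5.3 GiB, KV cache ~4.0 GiB, total ~9.3 GiB
```

So you land around 9-10 GiB before OS and app overhead, which is why 16GB unified memory handles it fine, while bigger quants or longer contexts start to squeeze you.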
I recommend trying out some 13Bs as well. In my experience, a good 13B is still better than a good 7B (for roleplaying purposes at least). With 13Bs, I recommend using Q5_K_M variants with 6144 context size. KoboldCpp sets the RoPE scaling automatically, but I’m not sure how LM Studio handles it. Here are some models you can try out: