• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: November 8th, 2023

  • I haven’t used GPTQ in a while, but I can say that GGUF supports 8-bit quantization, which you can use with llama.cpp. Also, if you use the original Hugging Face models (the ones you load with the Transformers loader), you have options there to load in either 8-bit or 4-bit.
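
    For the Transformers route, a minimal sketch looks something like this (the model id is just an example, and it assumes the `transformers`, `bitsandbytes`, and `accelerate` packages are installed with a CUDA GPU available):

    ```python
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # Pick one: 8-bit or 4-bit quantized loading via bitsandbytes.
    quant_config = BitsAndBytesConfig(load_in_4bit=True)  # or load_in_8bit=True

    model = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-v0.1",  # example model id, swap in your own
        quantization_config=quant_config,
        device_map="auto",  # let accelerate place layers on available GPUs
    )
    ```

    Same idea as the "load-in-8bit" / "load-in-4bit" checkboxes in UIs that wrap the Transformers loader.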