I’ve had good results using https://github.com/DevashishPrasad/CascadeTabNet
Use something like lmql, guidance, or guardrails to get the model to say it doesn’t know. I’ve also had some success with the airoboros fine-tuned models, which have this behaviour defined in the dataset via a specific prompt.
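For example, with guidance you can force the model to pick an explicit “don’t know” branch before it answers, instead of hoping it volunteers a refusal. Rough sketch, assuming the guidance v0.1+ API; the model path, context, and question are just placeholders:

```python
# Sketch: constrained "I don't know" with guidance (assumes the v0.1+ API).
from guidance import models, select, gen

lm = models.LlamaCpp("/path/to/model.gguf")  # placeholder path to a local GGUF model

lm += "Context: the report only covers Q1 revenue.\n"
lm += "Question: what was Q3 revenue?\n"
# Constrain the model to an explicit yes/no first, so refusing is a
# first-class output rather than something it has to volunteer.
lm += "Does the context contain the answer? " + select(["yes", "no"], name="known")

if lm["known"] == "yes":
    lm += "\nAnswer: " + gen("answer", max_tokens=50)
else:
    lm += "\nAnswer: I don't know."

print(str(lm))
```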
I think you don’t have CUDA properly set up. Use pip install --verbose to see the compilation messages when pip tries to build llama-cpp-python with CUDA; you might need to manually set the CUDA_HOME environment variable.
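For reference, a CUDA rebuild usually looks something like the following. The CUDA path here is just an example, and the exact CMake flag depends on your llama-cpp-python version (older releases use LLAMA_CUBLAS, newer ones use GGML_CUDA), so check the README for the one you have:

```bash
# Point CUDA_HOME at your actual CUDA toolkit install (path is an example).
export CUDA_HOME=/usr/local/cuda
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
  pip install llama-cpp-python --force-reinstall --no-cache-dir --verbose
```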
I haven’t used GPTQ in a while, but I can say that GGUF has 8-bit quantization, which you can use with llama.cpp. Furthermore, if you use the original Hugging Face models, the ones you load with the transformers loader, you have options in there to load in either 8-bit or 4-bit.
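If you’re loading directly through transformers, it’s roughly this. A sketch using bitsandbytes quantization; the model name is just an example, and you’ll need bitsandbytes and accelerate installed alongside transformers:

```python
# Sketch: loading a Hugging Face model in 8-bit or 4-bit via bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # example model, swap in your own

quant = BitsAndBytesConfig(load_in_4bit=True)  # or load_in_8bit=True
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",  # spread layers across available GPUs/CPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```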
Which frontend is that?
This is a really good question, and I’d also like to understand how to use the knowledge base with an LLM.