davidmezzetti@alien.top to LocalLLaMA · English · 1 year ago
RAG in a couple lines of code with txtai-wikipedia embeddings database + Mistral (alien.top)
DaniyarQQQ@alien.top · 1 year ago
Looks like it can work with AWQ models. Can it work with GPTQ (Exllama2) and GGUF models?
davidmezzetti@alien.top (OP) · 1 year ago
It works with GPTQ models as well; you just need to install AutoGPTQ. For GGUF models, you would need to replace the LLM pipeline with llama.cpp. See this page for more: https://huggingface.co/docs/transformers/main_classes/quantization
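A hedged sketch of that swap: retrieve context from the txtai-wikipedia embeddings database, then generate with a GGUF model through llama-cpp-python instead of the transformers-based LLM pipeline. The model filename, prompt template, and retrieval depth here are illustrative assumptions, not details from the thread.

```python
def build_prompt(question, contexts):
    """Assemble a simple RAG prompt from retrieved passages (template is an assumption)."""
    context = "\n".join(contexts)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def main():
    # Heavy imports kept inside main(); requires `pip install txtai llama-cpp-python`
    from txtai.embeddings import Embeddings
    from llama_cpp import Llama

    # Load the prebuilt txtai-wikipedia embeddings database from the Hugging Face Hub
    embeddings = Embeddings()
    embeddings.load(provider="huggingface-hub", container="neuml/txtai-wikipedia")

    # Retrieve top passages for the question
    question = "What is the speed of light?"
    results = embeddings.search(question, 3)
    prompt = build_prompt(question, [r["text"] for r in results])

    # GGUF model via llama.cpp in place of the transformers LLM pipeline
    # (model path is a placeholder; point it at any local Mistral GGUF file)
    llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)
    print(llm(prompt, max_tokens=256)["choices"][0]["text"])

if __name__ == "__main__":
    main()
```

The retrieval half is unchanged from the txtai approach; only the generation call moves to llama.cpp, which is what loads the GGUF weights.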