julylu@alien.top to LocalLLaMA • Automatic hallucination detection using inconsistency scoring • 1 year ago
That's cool. For RAG tasks, hallucinations can still occur if the retrieved document is unrelated to the question. Can this method be used in RAG? I am not sure.
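The question above can be sketched in code. This is a minimal, hypothetical illustration of the general inconsistency-scoring idea (sample several answers for the same question and retrieved document, then measure how much they disagree), not the linked method's actual implementation; the function names and the token-overlap metric are my own assumptions.

```python
# Hypothetical sketch of sampling-based inconsistency scoring for RAG answers.
# Intuition: if the retrieved document is unrelated to the question, sampled
# answers tend to diverge, so a high disagreement score flags likely hallucination.

def _tokens(text: str) -> set[str]:
    """Lowercase word set, for a crude overlap comparison."""
    return set(text.lower().split())

def inconsistency_score(answers: list[str]) -> float:
    """Mean pairwise Jaccard distance between sampled answers (0.0 = fully consistent)."""
    if len(answers) < 2:
        return 0.0
    dists = []
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            a, b = _tokens(answers[i]), _tokens(answers[j])
            union = a | b
            dists.append(1 - len(a & b) / len(union) if union else 0.0)
    return sum(dists) / len(dists)

# Divergent samples score higher than consistent ones:
consistent = ["Paris is the capital.", "Paris is the capital."]
divergent = ["Paris is the capital.", "The treaty was signed in 1648."]
print(inconsistency_score(consistent) < inconsistency_score(divergent))  # True
```

In a real RAG pipeline the `answers` list would come from sampling the LLM several times at nonzero temperature with the same retrieved context, and a semantic similarity model would likely replace the word-overlap metric.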
julylu@alien.top to LocalLLaMA • NeuralChat 7B: Intel's Chat Model Trained with DPO • 1 year ago
Maybe for RAG, a short answer is less prone to hallucination? I will test more. Thanks.
julylu@alien.top to LocalLLaMA • NeuralChat 7B: Intel's Chat Model Trained with DPO • 1 year ago
Same here; I found it tends to give short responses.