Vectara’s Hallucination Evaluation Model and leaderboard were launched last week. I noticed Mistral having a hallucination rate of 9.4% compared to 5.6% for Llama2. Any thoughts?
Source: https://github.com/vectara/hallucination-leaderboard
“llama2 7b > llama2 13b”
lol
I don’t think they actually tested base models. Look at the description of their methods: they don’t run the models themselves, they only use public APIs. They say they used mistral-instruct, not Mistral. Those are not the same models; you shouldn’t put “Mistral” in the table if you ran the tests on “Mistral-Instruct”. There is no information about which actual model was used for the Llama test, or about the test output. I suspect they used the llama-2-chat models, which were RLHFed. Mistral-Instruct is not RLHFed. It’s likely that RLHF can reduce the hallucination rate, and we are seeing its effects.
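To make the base-vs-instruct distinction concrete, here is a minimal sketch of why the two variants aren’t interchangeable: they expect different prompt formats. The Hugging Face model IDs in the comments are assumptions about which checkpoints the leaderboard might map to, and the summarization task is just illustrative.

```python
# Sketch: base vs. instruct prompting differ in format, so results
# from one can't be attributed to the other.
passage = "The Eiffel Tower was completed in 1889."

# Base model (e.g. mistralai/Mistral-7B-v0.1, assumed ID): plain text
# completion with no special tokens; you steer it with prompt shape alone.
base_prompt = f"Summarize the passage.\n\nPassage: {passage}\n\nSummary:"

# Instruct model (e.g. mistralai/Mistral-7B-Instruct-v0.1, assumed ID):
# wraps the request in its [INST] ... [/INST] instruction template.
instruct_prompt = f"<s>[INST] Summarize the passage:\n{passage} [/INST]"

print(base_prompt)
print(instruct_prompt)
```

A summarization benchmark run against the instruct checkpoint says nothing direct about the base checkpoint, which is the naming complaint above.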
Noob question: what is the recommended way to interact with a non-finetuned (non-chat) model?
Oof, 3% is a lot.
How is it possible that Llama2 13B and 7B have a lower hallucination rate than Claude?