We’re proud to introduce Rocket-3B 🦝, a state-of-the-art 3-billion-parameter model!
🌌 Size vs. Performance: Rocket-3B may be smaller with its 3 billion parameters, but it punches way above its weight. In head-to-head benchmarks like MT-Bench and AlpacaEval, it consistently outperforms models up to 20 times larger.
🔍 Benchmark Breakdown: In MT-Bench, Rocket-3B achieved an average score of 6.56, excelling in various conversation scenarios. In AlpacaEval, it notched a near 80% win rate, showcasing its ability to produce detailed and relevant responses.
🛠️ Training: The model is fine-tuned from Stability AI’s StableLM-3B-4E1T using Direct Preference Optimization (DPO) for enhanced performance (a rough sketch of such a setup is included at the end of the post).
📚 Training Data: We’ve combined multiple public datasets to ensure a comprehensive and diverse training base. This equips Rocket-3B with wide-ranging knowledge and strong response quality.
👩‍💻 Chat format: Rocket-3B follows the ChatML prompt format (see the example at the end of the post).
For an in-depth look at Rocket-3B, visit Rocket-3B’s Hugging Face page.
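For anyone curious how a DPO fine-tune of StableLM-3B-4E1T is typically wired up, here is a minimal sketch using Hugging Face’s trl library. This is not the actual Rocket-3B training script: the toy preference pairs and hyperparameters are purely illustrative, and DPOTrainer’s exact arguments vary between trl versions.

```python
# Minimal DPO fine-tuning sketch with trl (illustrative only; not the Rocket-3B recipe).
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "stabilityai/stablelm-3b-4e1t"  # base model named in the post
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # the base tokenizer has no pad token by default

# DPO trains on preference pairs: for each prompt, a preferred and a rejected completion.
pairs = Dataset.from_dict({
    "prompt":   ["What is the capital of France?"],
    "chosen":   ["The capital of France is Paris."],
    "rejected": ["France is a country."],
})

trainer = DPOTrainer(
    model,
    ref_model=None,   # trl clones the policy as the frozen reference model
    beta=0.1,         # strength of the KL penalty toward the reference
    args=TrainingArguments(
        output_dir="rocket-3b-dpo",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        remove_unused_columns=False,
    ),
    train_dataset=pairs,
    tokenizer=tokenizer,
)
trainer.train()
```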
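And since the post says Rocket-3B uses ChatML, here is what a ChatML prompt looks like, plus a rough transformers inference sketch. The repo id below is an assumption (confirm the exact name on the Hugging Face page linked above), and the sampling settings are arbitrary.

```python
# Rough inference sketch (repo id assumed; confirm on the Hugging Face page above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pansophic/rocket-3B"  # assumption: check the actual repo name on the model page
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# ChatML layout: each turn is wrapped in <|im_start|>role ... <|im_end|> markers,
# and an open <|im_start|>assistant header tells the model where to start generating.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain what a GGUF file is in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```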
I think TheBloke takes requests for GGUF conversions. Might want to check Hugging Face.
!RemindMe 7 days
TheBloke/rocket-3B-GGUF · Hugging Face