We’re proud to introduce Rocket-3B 🦝, a state-of-the-art 3 billion parameter model!

🌌 Size vs. Performance: Rocket-3B may be smaller with its 3 billion parameters, but it punches way above its weight. In head-to-head benchmarks like MT-Bench and AlpacaEval, it consistently outperforms models up to 20 times larger.

https://preview.redd.it/fxmz9sl1ls1c1.png?width=1273&format=png&auto=webp&s=63c3838cf4f01f7efcad9ec92b97c1e493111842

🔍 Benchmark Breakdown: In MT-Bench, Rocket-3B achieved an average score of 6.56, excelling in various conversation scenarios. In AlpacaEval, it notched a near 80% win rate, showcasing its ability to produce detailed and relevant responses.

https://preview.redd.it/rpgaknn3ls1c1.png?width=1280&format=png&auto=webp&s=6d2d7543f1459ceae7f96ad05ea064e8f8076517

🛠️ Training: The model is fine-tuned from Stability AI’s StableLM-3B-4e1t, employing Direct Preference Optimization (DPO) for enhanced performance.
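For context, DPO skips the explicit reward model of classic RLHF and optimizes the policy directly on preference pairs. A minimal sketch of the per-pair DPO loss (the function name and numbers are illustrative; real training uses per-token log-probs from the policy and a frozen reference model, batched over a preference dataset):

```python
import math

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    """DPO loss for one preference pair, given sequence log-probs.

    The policy is pushed to widen the log-prob margin of the chosen
    response over the rejected one, relative to the reference model.
    """
    chosen_logratio = policy_chosen_lp - ref_chosen_lp
    rejected_logratio = policy_rejected_lp - ref_rejected_lp
    margin = beta * (chosen_logratio - rejected_logratio)
    # -log(sigmoid(margin)), computed stably as log1p(exp(-margin))
    return math.log1p(math.exp(-margin))

# Example: the policy already prefers the chosen response,
# so the loss falls below log(2) (the value at zero margin).
loss = dpo_loss(-10.0, -14.0, -12.0, -12.0, beta=0.1)
```

At zero margin the loss is log 2; it shrinks as the policy separates chosen from rejected, which is why DPO can be run as plain supervised-style optimization over preference pairs.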

📚 Training Data: We’ve amalgamated multiple public datasets to ensure a comprehensive and diverse training base. This approach equips Rocket-3B with a wide-ranging understanding and response capability.

👩‍💻 Chat format: Rocket-3B follows the ChatML format.
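ChatML wraps each turn in `<|im_start|>role` / `<|im_end|>` markers. A minimal sketch of building a ChatML prompt (the helper name and message contents are illustrative; the role names and delimiters are the standard ChatML ones):

```python
def to_chatml(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
])
print(prompt)
```

Generation is then stopped on the `<|im_end|>` token, which keeps multi-turn conversations cleanly delimited.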

For an in-depth look at Rocket-3B, visit Rocket-3B’s Hugging Face page.

  • Sweet_Protection_163@alien.top · 1 year ago

    This smells like leftovers…

    We’ve been having “pretraining on the test set” for weeks and I’m craving something else.