• bot-333@alien.top · 1 year ago

    I see that their distilled model is much worse than StableLM 3E1T, so the finetuning improved it a lot. Unfortunately, they didn't release the datasets (would that still count as open source?). Also, I'm pretty sure my StableLM finetunes score better on the Open LLM benchmarks; they just don't allow StableLM models to be submitted.