Has anyone explored Intel’s new model yet? It’s a 7B model trained on SlimOrca, and it’s currently the top 7B model on the HF Open LLM Leaderboard.

I’ve found other 7B models to be surprisingly helpful, especially for annotation/data extraction tasks, so I’m curious if it’s worth replacing teknium/OpenHermes-2.5-Mistral-7B with this model.
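
For a quick side-by-side check, here’s a minimal sketch (it assumes the HF model ids Intel/neural-chat-7b-v3-1 and teknium/OpenHermes-2.5-Mistral-7B, enough GPU memory for the weights, and that in real use you’d wrap the prompt in each model’s chat template):

```python
# Rough side-by-side test on a toy extraction prompt. Model ids are assumed from the
# HF Hub; in real use, apply each model's prompt template rather than a bare string.
from transformers import pipeline

PROMPT = "Extract the company name and founding year from: 'Mistral AI was founded in 2023 in Paris.'"

for model_id in ["Intel/neural-chat-7b-v3-1", "teknium/OpenHermes-2.5-Mistral-7B"]:
    generator = pipeline(
        "text-generation",
        model=model_id,
        device_map="auto",   # requires accelerate
        torch_dtype="auto",
    )
    out = generator(PROMPT, max_new_tokens=64, do_sample=False)
    print(model_id, "->", out[0]["generated_text"])
```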

  • pseudonerv@alien.topB · 1 year ago

    I feel its writing is a bit formulaic, but overall it seems fine. Here’s its response to your post:

    In the realm of AI models, Intel has indeed introduced a new 7B neural chatbot named “Intel neural-chat-7b-v3-1.” This model was trained on Slim Orca, which is currently leading the HF open LLM leaderboard for 7B language models.

    As you’ve experienced with other 7B models, they can be quite helpful in various tasks such as annotation and data extraction. The Intel neural-chat-7b-v3-1 model might offer similar benefits or even surpass them due to its advanced training on Slim Orca.

    Comparing it to teknium/OpenHermes-2.5-Mistral-7B, you may find that the new Intel model could potentially provide better performance in certain tasks. However, it’s essential to test both models and evaluate their capabilities based on your specific requirements before making a decision.

    In conclusion, exploring the Intel neural-chat-7b-v3-1 model is definitely worth considering if you are looking for an advanced 7B language model that can assist with various tasks.

  • Feztopia@alien.topB · 1 year ago

    Actually, there was an even higher-ranked model, now deleted, from the same account that made SlimOrca. It had very high scores on one of the two new benchmarks on the leaderboard. I think that was a test model, so expect an even better model coming soon.

  • CardAnarchist@alien.topB · 1 year ago

    I could only get pretty muddled responses from the model.

    Despite it seemingly having a simple prompt template, I suspect I didn’t enter everything correctly into SillyTavern, as the outputs I was getting looked like what I get when the wrong template is selected for a model.

    Shrugs

    If a model is going to be successful, its authors should really pick a standard prompt template (preferably ChatML) and clearly state that’s what they’re using.
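
    For reference, here’s a small sketch of the two layouts: ChatML is what OpenHermes 2.5 documents, while the plain “### System / ### User / ### Assistant” form is what the neural-chat model card appears to use (treat that one as an assumption and verify against the card; in SillyTavern you’d select or define the matching instruct template):

    ```python
    # ChatML (documented for OpenHermes 2.5) vs. the plain sectioned layout the
    # neural-chat card appears to use -- the latter is an assumption, so verify it.

    def chatml_prompt(system: str, user: str) -> str:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        return (
            f"<|im_start|>system\n{system}<|im_end|>\n"
            f"<|im_start|>user\n{user}<|im_end|>\n"
            f"<|im_start|>assistant\n"
        )

    def neural_chat_prompt(system: str, user: str) -> str:
        # Plain-text sections; the trailing "### Assistant:" cues the model to answer.
        return f"### System:\n{system}\n### User:\n{user}\n### Assistant:\n"

    print(chatml_prompt("You are a careful annotator.", "Label the sentiment of: 'great phone'"))
    ```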

  • vatsadev@alien.topB · 1 year ago

    IMPORTANT!

    This isn’t trained from scratch; it’s another Mistral fine-tune with DPO, but on SlimOrca rather than UltraChat.

    I would stick with OpenHermes; it’s been trialed much more and it’s proven solid.
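
    For anyone unfamiliar with DPO, the core of it is a preference loss over chosen/rejected response pairs, scored against a frozen reference model. A minimal sketch of the objective (illustrative only; this is not Intel’s actual training code, and the numbers below are made up):

    ```python
    # Minimal sketch of the DPO objective (Rafailov et al., 2023). Inputs are
    # per-sequence log-probabilities of the chosen and rejected responses under the
    # policy being tuned and under a frozen reference model.
    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logp, policy_rejected_logp,
                 ref_chosen_logp, ref_rejected_logp, beta=0.1):
        # How much more the policy prefers "chosen" over "rejected" than the reference does.
        policy_margin = policy_chosen_logp - policy_rejected_logp
        ref_margin = ref_chosen_logp - ref_rejected_logp
        # Minimizing this pushes the policy to widen its preference margin
        # relative to the reference, scaled by beta.
        return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

    # Toy call with made-up log-probabilities:
    print(dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                   torch.tensor([-13.0]), torch.tensor([-13.5])))
    ```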

      • vatsadev@alien.topB · 1 year ago

        OpenHermes 2.5 is the latest version, but the OpenHermes series has a history of being good among AI models. I used it for some function calling and it’s really good.
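
        One common pattern for function calling with OpenHermes 2.5 (not an official API, just one way to sketch it): describe the tool in the system prompt, ask for a JSON call, and parse the reply. The tool name and schema below are made up for illustration:

        ```python
        # Sketch: JSON-based "function calling" over a ChatML prompt. The tool
        # get_weather and its schema are hypothetical, for illustration only.
        import json

        SYSTEM = (
            "You can call the tool get_weather(city: str). "
            'Reply ONLY with JSON like {"tool": "get_weather", "arguments": {"city": "..."}}.'
        )

        def chatml_prompt(system: str, user: str) -> str:
            return (
                f"<|im_start|>system\n{system}<|im_end|>\n"
                f"<|im_start|>user\n{user}<|im_end|>\n"
                f"<|im_start|>assistant\n"
            )

        prompt = chatml_prompt(SYSTEM, "What's the weather in Oslo?")
        # ...run generation with the model here; using an example reply for the sketch:
        reply = '{"tool": "get_weather", "arguments": {"city": "Oslo"}}'
        call = json.loads(reply)
        print(call["tool"], call["arguments"])
        ```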

  • backprop_wolf@alien.topB · 1 year ago

    I find it slightly better than Mistral, and a system prompt does wonders for aligning it to my needs.
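
    If it helps, here’s a quick sketch of setting a system prompt via the tokenizer’s chat template (this assumes the tokenizer ships one; if not, format the prompt manually per the model card):

    ```python
    # Sketch: steer the model with a system message. Assumes the tokenizer provides a
    # chat template; otherwise fall back to the prompt format from the model card.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Intel/neural-chat-7b-v3-1")
    messages = [
        {"role": "system", "content": "You are a terse annotator. Reply with a single label."},
        {"role": "user", "content": "Sentiment of: 'battery life is awful'"},
    ]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    print(prompt)
    ```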