• drooolingidiot@alien.topB · 1 year ago

    This is amazing. Yesterday we got Deepseek, and today we’re getting Qwen. Thank you for releasing this model!

    I’m looking forward to seeing comparisons.

  • Wonderful_Ad_5134@alien.topB · 1 year ago

    If the US keeps going full woke and is too afraid to work as hard as possible on the LLM ecosystem, China won’t think twice before winning this battle (which is basically the 21st-century battle in terms of technology).

    Feels sad to see the US decline like that…

  • carbocation@alien.topB · 1 year ago

    It would be great to see GGUF versions. (At least, my workflow right now goes through Ollama.) How are people running Qwen-72B locally right now?
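    In the meantime, here is a rough sketch of a local run through Hugging Face transformers (the Qwen/Qwen-72B-Chat repo id, the 4-bit load, and the chat() helper are assumptions carried over from earlier Qwen releases, so treat it as a starting point rather than a recipe):

        # Hypothetical local run of Qwen-72B via transformers while GGUF/Ollama
        # support is still pending. Requires trust_remote_code for Qwen's custom
        # modeling code and bitsandbytes for the 4-bit load.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen-72B-Chat"  # assumed repo id

        tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            device_map="auto",       # spread layers across available GPUs
            load_in_4bit=True,       # roughly 40 GB of VRAM instead of ~145 GB in fp16
            trust_remote_code=True,  # Qwen ships its own modeling/chat code
        ).eval()

        # Earlier Qwen chat models expose a chat() helper via the remote code;
        # the exact signature in this release is an assumption.
        response, history = model.chat(tokenizer, "How much VRAM does a 72B model need?", history=None)
        print(response)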

  • QuieselWusul@alien.topB · 1 year ago

    Why did so many new Chinese 70B-class foundation models get released on the same day (this one, DeepSeek, XVERSE)? Is there any reason they all came out in such a short window?

  • pseudonym325@alien.topB · 1 year ago

    The last Qwen didn’t really take off as a base model for further fine-tunes.

    Looking forward to the results on the German data protection training benchmark ;)

  • norsurfit@alien.topB · 1 year ago

    In my informal testing, Qwen-72B is quite good. Anecdotally, I’d rate it stronger than Llama 2 based on the few tests I have run.

  • ambient_temp_xeno@alien.topB · 1 year ago

    The first thing I looked for was the number of training tokens. I think Yi-34B got a lot of benefit from its 3 trillion tokens, so this model also being trained on 3 trillion bodes well.

  • PookaMacPhellimen@alien.topOPB · 1 year ago

    https://github.com/QwenLM/Qwen

    Also released was a 1.8B model.

    From Binyuan Hui’s Twitter announcement:

    “We are proud to present our sincere open-source works: Qwen-72B and Qwen-1.8B! Including Base, Chat and Quantized versions!

    🌟 Qwen-72B has been trained on high-quality data consisting of 3T tokens, boasting a larger parameter scale and more training data to achieve a comprehensive performance upgrade. Additionally, we have expanded the context window length to 32K and enhanced the system prompt capability, allowing users to customize their own AI assistant with just a single prompt.

    🎁 Qwen-1.8B is our additional gift to the research community, striking a balance between maintaining essential functionalities and maximizing efficiency, generating 2K-length text content with just 3GB of GPU memory.

    We are committed to continuing our dedication to the open-source community and thank you all for your enjoyment and support! 🚀 Finally, Happy 1st birthday ChatGPT. 🎂 “
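
    For a sense of how that system-prompt customization might look in practice, here is a minimal sketch using the smaller 1.8B chat model (the Qwen/Qwen-1_8B-Chat repo id and the system= argument to chat() are assumptions based on how earlier Qwen chat models were exposed):

        # Hypothetical use of the single-system-prompt customization on Qwen-1.8B.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen-1_8B-Chat"  # assumed repo id

        tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            device_map="auto",
            torch_dtype=torch.float16,  # fp16 weights for 1.8B fit in a few GB of VRAM
            trust_remote_code=True,     # Qwen ships its own modeling/chat code
        ).eval()

        # One system prompt is supposed to be enough to define the assistant's persona;
        # the system= keyword mirrors earlier Qwen chat() signatures and is an assumption here.
        response, _ = model.chat(
            tokenizer,
            "What can I do with a 32K context window?",
            history=None,
            system="You are a concise assistant running on a small local GPU.",
        )
        print(response)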