• its_just_andy@alien.topB · 1 year ago

    If you’re interested in running your own models for any reason, you really should build your own evaluation dataset for the scenarios you care about.

    At this point, the public benchmarks are such a mess. Do you really care whether the model you select has the highest MMLU score, or only that it performs best on the scenarios you actually need?
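
    As a rough illustration, here’s a minimal harness. It’s a sketch only: it assumes an OpenAI-compatible local endpoint (llama.cpp server, Ollama, etc.), a hypothetical my_scenarios.jsonl file of {"prompt": ..., "expect": ...} cases, and a deliberately crude substring check standing in for whatever scoring your scenarios actually need.

    ```python
    import json
    import requests  # third-party: pip install requests

    # Assumed OpenAI-compatible local endpoint; adjust for your server.
    ENDPOINT = "http://localhost:8080/v1/chat/completions"

    def ask(prompt: str, model: str) -> str:
        """Send one prompt to the local server and return the reply text."""
        resp = requests.post(
            ENDPOINT,
            json={
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "temperature": 0,  # keep runs repeatable
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    def run_eval(dataset_path: str, model: str) -> float:
        """Score a model on a JSONL file of {"prompt", "expect"} cases."""
        with open(dataset_path) as f:
            cases = [json.loads(line) for line in f]
        passed = 0
        for case in cases:
            answer = ask(case["prompt"], model)
            ok = case["expect"].lower() in answer.lower()  # crude pass/fail
            passed += ok
            print(("PASS" if ok else "FAIL"), case["prompt"][:60])
        return passed / len(cases)

    if __name__ == "__main__":
        print(f"score: {run_eval('my_scenarios.jsonl', 'local-model'):.0%}")
    ```

    The point isn’t the scoring function, it’s that you own the cases, so the number means something for your workload.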

    • Exios-@alien.topB · 1 year ago

      This seems to me like the most logical conclusion. I’m currently developing a set of moral/ethical dilemma scenarios to probe different perspectives and response strategies; for my personal use cases, discussing a topic, breaking it down into manageable pieces, and then exploring the nuances, this is very effective. That seems far too broad a “use case” to capture in one set of benchmarks, unless the set is incredibly comprehensive and refined over and over as trends develop.
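
      One way such dilemmas could be made machine-scorable, as a minimal sketch only, since the scenario wording, the must_mention stems, and the keyword matching are all placeholder assumptions standing in for real rubric grading:

      ```python
      # Hypothetical encoding of one dilemma: no single "right" answer,
      # just perspectives a good response should surface.
      scenario = {
          "prompt": (
              "A self-driving car must choose between two harmful outcomes. "
              "Walk through the competing ethical framings."
          ),
          "must_mention": ["utilitarian", "deontolog"],  # loose word stems
      }

      def coverage(answer: str, case: dict) -> float:
          """Fraction of required perspective stems the answer touches."""
          text = answer.lower()
          hits = sum(stem in text for stem in case["must_mention"])
          return hits / len(case["must_mention"])
      ```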

    • shibe5@alien.topB · 1 year ago

      With the abundance of models, most developers and users have to select a small subset of the available models for their own evaluation, and that selection has to be based on performance data that’s already available. At that stage, picking the models with, for example, the highest MMLU scores is one reasonable way to go about it.
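
      That shortlisting step can be as simple as sorting a leaderboard export, as in this sketch, where the leaderboard.csv file and its model/mmlu column names are assumptions:

      ```python
      import csv

      # Hypothetical leaderboard export; file and column names are assumptions.
      with open("leaderboard.csv") as f:
          rows = list(csv.DictReader(f))

      # Shortlist the top 5 by MMLU, then run your *own* eval set on just those.
      shortlist = sorted(rows, key=lambda r: float(r["mmlu"]), reverse=True)[:5]
      for row in shortlist:
          print(row["model"], row["mmlu"])
      ```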