I want to see whether some presets and custom modifications hold up in benchmarks, but running HellaSwag or MMLU looks too complicated for me, and uploading 20 GB of data takes 10+ hours.

I assume there isn’t a convenient webui for chumps to run benchmarks with (apart from ooba perplexity, which I assume isn’t the same thing?). Any advice?
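
For context, the usual route seems to be EleutherAI's lm-evaluation-harness; if I'm reading its docs right, a run boils down to something like this (the model name and batch size are placeholders, not something I've tested):

```python
# Rough sketch using lm-evaluation-harness's Python API; the model
# name and batch size are placeholders, not recommendations.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                         # Hugging Face backend
    model_args="pretrained=mistralai/Mistral-7B-v0.1",  # placeholder model
    tasks=["hellaswag", "mmlu"],                        # benchmarks to score
    batch_size=8,
)
print(results["results"])  # per-task metrics
```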

  • uhuge@alien.topB
    1 year ago

    What is needed to get it done? Can anyone help, or will it just take a few days of your focused time?

    • mattapperson@alien.topB
      1 year ago

      It’s just a side project for now in my free time. Started building it for my own sanity. But it’s not really in any shape that someone could just jump right in and help. So unless you’re a VC willing to throw money at me to make it my full-time job lol… probably a couple weeks?

      My goal is to make it not just a tool to run evals, but to create a holistic build, test, use toolkit to do everything from:

      • Cleaning datasets
      • Generating synthetic training data from existing data and files
      • Creating LoRAs and full fine tunes
      • Prompt evaluation and automated iterations
      • Running evaluations/benchmarks

      Trying to do all that in a way that is approachable and easy to use and understand for your average software engineer, not just AI scientists. This stuff shouldn’t require setting up 20 libraries, writing all the glue code, or knowing Python.
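
      For a sense of the glue code I mean, even a bare-bones LoRA setup with Hugging Face's transformers + peft already looks something like this (a rough sketch; the model name and hyperparameters are placeholders):

      ```python
      # Rough sketch of today's LoRA glue code (placeholders throughout).
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import LoraConfig, get_peft_model

      base = "meta-llama/Llama-2-7b-hf"  # placeholder base model
      model = AutoModelForCausalLM.from_pretrained(base)
      tokenizer = AutoTokenizer.from_pretrained(base)

      config = LoraConfig(
          r=8,                                  # rank of the low-rank update
          lora_alpha=16,                        # scaling factor
          target_modules=["q_proj", "v_proj"],  # attention projections to adapt
          lora_dropout=0.05,
          task_type="CAUSAL_LM",
      )
      model = get_peft_model(model, config)
      model.print_trainable_parameters()  # only the LoRA weights are trainable
      # ...and that's before any dataset loading, Trainer setup, or eval loop.
      ```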