I guess the question is: what order of magnitude of parameters are we talking about before you need to step up to a bigger model? I understand it’s measured in billions of parameters, and that the parameters are basically the weights, learned from the training data, that the model uses to predict words (I think of it as a big weight map), so you’d expect “sharp sword” more often than “aspirin sword.”
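
To make that “weight map” picture concrete, here is a minimal sketch (my own example, assuming GPT-2 via the Hugging Face transformers library, neither of which is mentioned in the thread) that compares the probability a small pretrained model assigns to “sword” after “sharp” versus after “aspirin”:

```python
# Minimal sketch: compare next-token probabilities with a small pretrained LM.
# Assumptions not from the thread: GPT-2 as the model, Hugging Face transformers as the library.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_prob(prompt: str, next_word: str) -> float:
    """Probability the model assigns to `next_word` as the very next token."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]      # logits for the next position
    probs = torch.softmax(logits, dim=-1)
    next_id = tokenizer.encode(next_word)[0]         # first BPE token of the candidate word
    return probs[next_id].item()

print(next_token_prob("He swung a sharp", " sword"))     # comparatively high
print(next_token_prob("He swung an aspirin", " sword"))  # much lower
```

The absolute numbers don’t matter much; the point is that the weights encode which continuations are more plausible given the context.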

Is there a limit to the amount of training data, a point past which you hit a plateau? Like, I imagine training on Shakespeare would be harder than on Poe because of all the words Shakespeare made up. I’d probably train a Shakespeare model on his works plus wikis and discussions of his work.

I know that’s kind of all over the place; I’m fumbling around the topic, trying to get a grasp on it so I can start prying it open.

  • creaturefeature16@alien.topB

    I have a similar question to OP’s. What if you wanted to train a model specifically on coding? And, even more specifically, on just a particular library?

    • CKtalon@alien.topB

      You are probably talking about fine-tuning rather than (pre)training a model. There are models that were trained for coding, like codellama and all its variants. You could train on the library’s code, but I doubt you’d get much out of it. Perhaps the best approach is to create some instruction data based on the library (either manually or synthetically) and fine-tune on that.
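
      As a rough sketch of what synthetic instruction data from a library could look like (this is my own assumption, not a recipe from the thread, using Python’s standard inspect module and requests as a stand-in for whatever library you care about):

      ```python
      # Sketch: turn a library's public functions and docstrings into
      # instruction/response pairs, in a JSONL format many fine-tuning
      # scripts accept. "requests" is just a placeholder target library.
      import inspect
      import json

      import requests  # swap in the library you actually want to cover

      pairs = []
      for name, obj in inspect.getmembers(requests, inspect.isfunction):
          doc = inspect.getdoc(obj)
          if not doc:
              continue
          pairs.append({
              "instruction": f"What does requests.{name} do and how do I call it?",
              "response": f"Signature: requests.{name}{inspect.signature(obj)}\n\n{doc}",
          })

      with open("library_instructions.jsonl", "w") as f:
          for pair in pairs:
              f.write(json.dumps(pair) + "\n")

      print(f"Wrote {len(pairs)} instruction/response pairs")
      ```

      From there you would fine-tune a coding model (codellama or similar) on that file with whatever trainer you prefer; the "instruction"/"response" keys are just a common convention, not a requirement.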

      • paradigm11235@alien.topOPB

        I’m glad I goofed in my question, because your response was super helpful, but I now realize I was missing the terminology when I posted. I was talking about fine-tuning an existing model with a specific goal in mind (re: poetry).