Let’s say you spend an unholy amount of processing time training a 70B. You like history, so you want an LLM that’s good for historical info.

By the time you upload it, the LLM is already outdated. Now what?

If you want it to speak accurately about modern events, you’d have to retrain it, and keep repeating the process over and over, because time keeps moving while your LLM does not.

This could clearly be made more efficient. Ideally, each subject would live in its own separate file, while the central “brain” of the LLM becomes its own structure.

As it stands, updating the entire LLM is cost-prohibitive and makes no sense if you only need to correct specific data points. Why, for example, would you retrain the entire Cantonese dictionary when you just want to fix the list of Alaskan donut shops?

I understand that the tech currently treats the information and the “thinking” inside an LLM as one and the same. It seems like it would be more efficient, and more effective, to separate the two.
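
For what it’s worth, the separation being described here is roughly what retrieval-augmented generation (RAG) does today: keep the model frozen as the “brain” and keep the facts in editable document stores. Below is a minimal Python sketch of the idea; the toy keyword retriever and the `call_llm` stub are hypothetical stand-ins for a real vector search and a real model API.

```python
# RAG in miniature: the frozen model is the "brain"; each subject is
# its own updatable list of documents. Updating a fact is a file edit,
# not a training run.

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, store: dict, k: int = 2) -> list:
    """Pull the k most relevant snippets across all subject files.
    A real system would use embeddings and a vector index instead."""
    docs = [d for subject_docs in store.values() for d in subject_docs]
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever frozen model you actually run."""
    return "[model answer, grounded in the supplied context]"

def answer(query: str, store: dict) -> str:
    context = "\n".join(retrieve(query, store))
    return call_llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

knowledge = {
    "history": ["The Treaty of Westphalia was signed in 1648."],
    "alaska_donuts": ["(existing list of Alaskan donut shops)"],
}
# Fixing the donut list touches one subject file; the Cantonese
# dictionary, and the model weights, are never touched.
knowledge["alaska_donuts"].append("(newly corrected shop entry)")
print(answer("When was the Treaty of Westphalia signed?", knowledge))
```

The trade-off is that the model still has to be good at reading and synthesizing the retrieved text, which is exactly the “brain” part you would retrain only rarely.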

  • Only-Letterhead-3411@alien.top · 1 year ago

    I think there’s a solution to that: LLM internet access. The AI just needs to be smart enough to use tools like a human does. Then it can search with the right query and extract the right answer from the internet.
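
    A toy sketch of that tool-use loop, in Python. The `llm` and `web_search` functions here are hypothetical stand-ins, not a real model or search API; the point is the control flow: the model emits a search request, the harness executes it, and the result is fed back in for a grounded answer.

    ```python
    import re

    def llm(prompt: str) -> str:
        """Hypothetical tool-aware model: asks to search when it has
        no observation yet, answers once one is supplied."""
        if "Observation:" not in prompt:
            return 'SEARCH("latest events this week")'
        return "Grounded answer based on the observation above."

    def web_search(query: str) -> str:
        """Hypothetical search backend (SearxNG, a search API, etc.)."""
        return f"Top result text for: {query}"

    def run(question: str) -> str:
        prompt = f"Question: {question}\n"
        for _ in range(3):  # cap the number of tool calls
            reply = llm(prompt)
            match = re.match(r'SEARCH\("(.+)"\)', reply)
            if not match:             # no tool call -> final answer
                return reply
            result = web_search(match.group(1))   # execute the tool
            prompt += f"Observation: {result}\n"  # feed result back
        return "Stopped: too many tool calls."

    print(run("What happened in the news this week?"))
    ```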

    • waxbolt@alien.top · 1 year ago

      Frustratingly, we don’t have any good plugins for this in oobabooga. Plugin authors seem to have been scared off by how fast the APIs change.