• Thistleknot@alien.topOPB

    I had to read that a few times.

    Auto-regressive is like forecasting: it's iterative.

    LLM reliability is this vague notion of steering toward the right answer.

    Hence tree of thoughts as a way to 'plan' toward that vague notion of the right answer.

    It gets around the single-path, one-token-at-a-time limitation of next-token prediction with parallel planning (roughly the idea sketched below).
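
    A minimal sketch of that parallel-planning idea, not the actual Tree of Thoughts implementation; `generate_candidates` and `score_state` are hypothetical stand-ins for an LLM "propose next thoughts" call and a "rate this partial plan" call:

    ```python
    from typing import Callable, List, Tuple

    def tree_of_thoughts(
        prompt: str,
        generate_candidates: Callable[[str], List[str]],  # hypothetical: LLM proposes several next "thoughts"
        score_state: Callable[[str], float],              # hypothetical: rates how promising a partial plan is
        depth: int = 3,
        beam_width: int = 2,
    ) -> str:
        """Search over several partial plans in parallel instead of committing
        to a single next-token continuation."""
        frontier: List[Tuple[float, str]] = [(0.0, prompt)]
        for _ in range(depth):
            expanded: List[Tuple[float, str]] = []
            for _, state in frontier:
                for thought in generate_candidates(state):
                    new_state = state + "\n" + thought
                    expanded.append((score_state(new_state), new_state))
            # Keep only the most promising partial plans: the "parallel planning" part.
            frontier = sorted(expanded, key=lambda t: t[0], reverse=True)[:beam_width]
        _, best_plan = max(frontier, key=lambda t: t[0])
        return best_plan
    ```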

    • Willing_Breadfruit@alien.topB

      ermm, idk what you mean by any of those words.

      Auto-regressive just means it's a sequence model whose predictions depend on its own previous predictions.

      So when you predict a token at time t, you condition on the previous tokens you already predicted.

      Consider “the cat in the hat”. A transformer would have predicted it in the following manner (assuming each word is a single token, because I'm lazy):

      - P(“the” | prompt) is highest

      - P(“cat” | “the”, prompt) is highest

      - P(“in” | “the”, “cat”, prompt) is highest

      So you can see there is a dependency between each of its predictions and the next prediction. This is what is meant by auto-regressive.
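
      A toy illustration of that dependency, as greedy decoding. The probability table here is made up just to show the structure; with a real LLM these distributions would come from a forward pass over the context:

      ```python
      # Each prediction is conditioned on everything predicted so far,
      # and each predicted token is fed back in as context for the next one.

      def next_token_distribution(context: tuple) -> dict:
          # Hypothetical P(next token | previous tokens); only the dependency structure matters.
          table = {
              (): {"the": 0.9, "a": 0.1},
              ("the",): {"cat": 0.8, "dog": 0.2},
              ("the", "cat"): {"in": 0.7, "sat": 0.3},
              ("the", "cat", "in"): {"the": 0.9, "a": 0.1},
              ("the", "cat", "in", "the"): {"hat": 0.95, "box": 0.05},
          }
          return table.get(context, {"<eos>": 1.0})

      def greedy_decode(max_len: int = 5) -> list:
          context: tuple = ()
          for _ in range(max_len):
              dist = next_token_distribution(context)  # P(token | tokens predicted so far)
              token = max(dist, key=dist.get)          # take the highest-probability token
              if token == "<eos>":
                  break
              context = context + (token,)             # feed the prediction back in as input
          return list(context)

      print(greedy_decode())  # ['the', 'cat', 'in', 'the', 'hat']
      ```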

      • Thistleknot@alien.topOPB

        Yes, I understand all that.

        Auto-regressive is like ARIMA in time series forecasting.

        Then RNNs came along, then sequence-to-sequence models.

        They all have in common that the last prediction is fed back in as input for the next prediction.

        Hence auto-regressive.
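
        Just to make the analogy concrete (these are the standard definitions, not anything specific to this thread): an AR(1) forecast and next-token prediction have the same feed-the-previous-output-back-in shape.

        ```latex
        % AR(1) from classical time-series forecasting:
        % the previous value x_{t-1} feeds the next prediction.
        x_t = c + \varphi \, x_{t-1} + \varepsilon_t

        % Next-token prediction in an LLM: the next token is
        % conditioned on all previously generated tokens (and the prompt).
        P(w_t \mid w_1, \dots, w_{t-1})
        ```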