• 1 Post
  • 24 Comments
Joined 1 year ago
Cake day: October 30th, 2023

  • You need a knowledge base (with your legal or financial data) and semantic search to feed the relevant context to a model fine-tuned to follow instructions and answer from that context; a minimal sketch follows the model list below.

    For small (7B) models there are OpenHermes-2.5-Mistral-7B, Mistral-7B-OpenOrca, and dolphin-2.1-mistral-7b.

    For bigger models there is Nous-Capybara-34B.
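
    A minimal retrieval sketch, assuming sentence-transformers for the embeddings; the chunks and the retrieve helper are purely illustrative, and the final call to whichever instruction-tuned model you run is left as a placeholder:

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: any sentence-embedding model works here; all-MiniLM-L6-v2 is a common default.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Knowledge base: pre-chunked legal/financial text (illustrative snippets).
chunks = [
    "Invoices are payable within 30 days of receipt.",
    "Late payments accrue interest at 1.5% per month.",
    "Either party may terminate the agreement with 60 days written notice.",
]
chunk_embeddings = embedder.encode(chunks, convert_to_tensor=True)

def retrieve(question, top_k=2):
    """Semantic search: return the top_k chunks most similar to the question."""
    q_emb = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, chunk_embeddings, top_k=top_k)[0]
    return [chunks[h["corpus_id"]] for h in hits]

question = "What interest applies to late payments?"
context = "\n".join(retrieve(question))

# Prompt for the instruction-tuned model (e.g. one of the 7B models above),
# telling it to answer only from the retrieved context.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)
print(prompt)  # pass this to whatever local model or server you use
```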

  • Multiple passes at lower learning rates aren’t supposed to produce different results.

    Overfitting is not a technical challenge; it’s a mathematical property which undeniably exists whenever the training data is smaller than the full problem domain and, simultaneously, the learning rate (importantly, multiplied by the number of epochs!) would result in a higher specialization ratio on the learned versus unobserved data than would be expected based on the ratio of the learned to unobserved size.

    Basically, if you learn 1-digit addition but half your training examples have 1 as the left number and none have 5 as the left number, then your model will likely treat 5 and 1 the same (since it’s so overtrained on examples with 1s); see the small sketch of such a skewed training set at the end of this comment.

    GPT-4:

    The statement contains several inaccuracies:

    1. Multiple passes at lower learning rates: It’s not entirely true that multiple passes with lower learning rates will produce identical results. Different learning rates can lead to different convergence properties, and multiple passes with lower learning rates can help in fine-tuning the model and potentially avoid overfitting by making smaller, more precise updates to the weights.
    2. Overfitting as a mathematical property: Overfitting is indeed more of an empirical observation than a strict mathematical property. It is a phenomenon where a model learns the training data too well, including its noise and outliers, which harms its performance on unseen data. It’s not strictly due to the size of the training data but rather the model’s capacity to learn from it relative to its complexity.
    3. Learning rate multiplied by the number of epochs: The learning rate and the number of epochs are both factors in a model’s training process, but their product is not a direct measure of specialization. Instead, it’s the learning rate’s influence on weight updates over time (across epochs) that can affect specialization. Moreover, a model’s capacity and the regularization techniques applied also significantly influence overfitting.
    4. Example of learning 1 digit addition: The example given is somewhat simplistic and does not fully capture the complexities of overfitting. Overfitting would mean the model performs well on the training data (numbers with 1) but poorly on unseen data (numbers with 5). However, the example also suggests a sampling bias in the training data, which is a separate issue from overfitting. Sampling bias can lead to a model that doesn’t generalize well because it hasn’t been exposed to a representative range of the problem domain.

    Overall, while the intention of the statement is to describe overfitting and the effects of learning rates, it conflates different concepts and could benefit from clearer differentiation between them.
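
    To make the 1-digit-addition example above concrete, here is a small, purely illustrative sketch of that kind of skewed training set (the helper name and counts are hypothetical, not from the original comment):

```python
import random
from collections import Counter

random.seed(0)

def biased_example():
    """One training example for 1-digit addition: half the time the left
    operand is 1, and 5 never appears on the left."""
    left = 1 if random.random() < 0.5 else random.choice([0, 2, 3, 4, 6, 7, 8, 9])
    right = random.randint(0, 9)
    return (f"{left} + {right} =", str(left + right))

train = [biased_example() for _ in range(1000)]
left_counts = Counter(prompt.split()[0] for prompt, _ in train)
print(left_counts.most_common())
# "1" appears in roughly half the prompts and "5" never does, so a model
# trained only on this set has no signal for how to handle a leading 5.
```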