Wondering what everyone thinks, assuming this is true. It seems they’re already beating all open-source models, including Llama 2 70B. Is this all down to data quality? Will Mistral be able to beat it next year?
Edit: Link to the paper -> https://arxiv.org/abs/2310.17680
Scaling laws suggest that you can reduce parameter count by increasing the number of tokens. There is a limit, however, and it seems to be at around 32% of the original model size: https://www.harmdevries.com/post/model-size-vs-compute-overhead/
So that would put the resulting model at around 56B (presumably starting from a ~175B GPT-3.5-class model, since 0.32 × 175B ≈ 56B). Not sure how they got it down further; maybe through quantization.
The scaling laws have quite a bit more wiggle room if you’re willing to accept less bang for your buck at training time. The post mentions that it isn’t a hard threshold but more of a region where you can expect diminishing returns, which is true. What the original Chinchilla paper didn’t emphasize is that those diminishing returns aren’t really “diminishing” in the way that matters. Yes, you have to put in more training compute to reach a given level of quality, but training compute usually pales in comparison to inference compute: the former is a large cost you pay once and then you’re done, while the latter is a continuous cost you keep paying for as long as you host your LLM. Given enough time, inference compute will always pull ahead of training compute.
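To make that concrete, here’s a rough back-of-envelope sketch using the common approximations of ~6·N·D FLOPs for training and ~2·N FLOPs per generated token for inference (forward pass only). The 20B-parameter / 13T-token figures are just the numbers assumed in this thread for Turbo, not confirmed ones:

```python
# Break-even between one-time training compute and ongoing inference compute,
# using the standard order-of-magnitude approximations:
#   training  ~ 6 * N * D FLOPs
#   inference ~ 2 * N FLOPs per generated token
# N and D below are the thread's *assumed* figures for Turbo.

N = 20e9   # parameters (assumed)
D = 13e12  # training tokens (assumed)

train_flops = 6 * N * D            # one-time cost
flops_per_inference_token = 2 * N  # recurring cost per generated token

# Number of served tokens at which inference compute catches up with training compute:
break_even_tokens = train_flops / flops_per_inference_token
print(f"break-even at ~{break_even_tokens:.1e} inference tokens")  # 3.9e13, i.e. 3x the training set
```

The break-even point works out to 3·D regardless of model size: once a deployed model has generated about three times its training-set size in tokens, the recurring inference cost has already matched the one-time training cost, which is why over-training a smaller model can be the cheaper choice overall.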
If you take a look at the scaling equations they used (the exact constants may vary between model architectures and datasets, but they still give a reasonably good approximation), then for a model with N parameters trained on D tokens the loss is given by (see eq. 10 in https://arxiv.org/abs/2203.15556):
L(N, D) = 1.69 + 406.4 / N^0.34 + 410.7 / D^0.28
If you were to take Llama 2 70B’s values and plug them in, we’d end up with:
L(70*10^9, 2*10^12) = 1.69 + 406.4 / (70*10^9)^0.34 + 410.7 / (2*10^12)^0.28 = 1.9211
By comparison, if we take Turbo’s values and plug them in (here I’ll use 13T training tokens, since that’s the popular estimate for GPT-4’s training set size, so I’ll assume they used it for Turbo as well), we end up with:
L(20*10^9, 13*10^12) = 1.69 + 406.4 / (20*10^9)^0.34 + 410.7 / (13*10^12)^0.28 = 1.905
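If you want to sanity-check those numbers yourself, here’s a minimal sketch that just evaluates eq. 10 with the constants above; the 20B / 13T figures for Turbo are assumptions, as noted:

```python
# Chinchilla loss estimate, eq. 10 of arXiv:2203.15556:
#   L(N, D) = E + A / N^alpha + B / D^beta
E, A, ALPHA, B, BETA = 1.69, 406.4, 0.34, 410.7, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA

print(chinchilla_loss(70e9, 2e12))   # Llama 2 70B, 2T tokens          -> ~1.921
print(chinchilla_loss(20e9, 13e12))  # assumed Turbo: 20B, 13T tokens  -> ~1.905
```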
So in this case, Turbo actually does come out ahead of Llama 2 by virtue of the larger training corpus. It also means that if future models significantly increase the pretraining dataset size (whether that’s Llama 3, Llama 4, Mistral, or something else), there’s a very real chance that smaller models will reach this level of quality.