• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: October 30th, 2023

  • If you’re looking at cloud / API services, the best options are probably TogetherAI or DeepInfra. TogetherAI tops out at $0.0009 / 1K tokens for 70B models, and DeepInfra tops out at $0.0007 / 1K input and $0.00095 / 1K output for 70B models; both are well below Turbo and GPT-4 price levels. The big caveat is that this only works if the model you want to use is hosted there. If it isn’t and you want to deploy the model yourself, RunPod is probably the “cheapest” option, but it charges for as long as the pod is active and will burn through money very quickly. In that case, RunPod likely won’t be much cheaper, if at all, than using GPT-4.



  • Assuming training compute is roughly 6ND FLOPs (N = number of parameters, D = dataset size in tokens), you could take the full RedPajama dataset (30T tokens) and a 500B-parameter model and it’d come out to:

    6*(30*10^12)*(500*10^9) = 9*10^25

    In order to qualify, you would need a cluster (running at the 10^20 FLOP/s threshold) that could train this beast in about:

    10^26 / 10^20 = 1000000 seconds = 11.57 days
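
    For anyone who wants to play with the numbers, here’s a minimal Python sketch of the same back-of-the-envelope math (the 500B / 30T / 10^20 FLOP/s figures are just the ones used above, not anything official):

      # Rough 6*N*D training-compute estimate and wall-clock time on a given cluster.
      def training_flops(n_params: float, n_tokens: float) -> float:
          """Approximate training compute via the 6*N*D rule of thumb."""
          return 6 * n_params * n_tokens

      flops = training_flops(500e9, 30e12)        # 500B params, 30T tokens
      print(f"{flops:.2e} FLOPs")                 # ~9.00e+25

      cluster_flops_per_sec = 1e20                # the 10^20 FLOP/s cluster from above
      seconds = 1e26 / cluster_flops_per_sec      # rounding the compute up to 10^26
      print(f"{seconds:.0f} s = {seconds / 86400:.2f} days")   # 1000000 s ≈ 11.57 days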



  • The scaling laws have quite a bit more wiggle room if you’re willing to accept less benefit for your buck at training time. The authors mention that it isn’t a hard threshold but more like a region where you can expect diminishing returns, which is true. What the original Chinchilla paper didn’t emphasize is that those diminishing returns aren’t really “diminishing” in practice. Yes, you have to put in more training compute to reach a given level of quality, but training compute usually pales in comparison to inference compute: the former is a large cost you pay once and then you’re done, while the latter is a continuous cost you pay for as long as you host your LLM. Given enough time, inference compute will always pull ahead of training compute.

    If you take a look at the scaling equations they fit (the exact constants may vary between model architectures and datasets, but they still give a reasonably good approximation), the loss for a model with N parameters trained on D tokens is given by (see eq. 10 in arXiv:2203.15556):

    L(N, D) = 1.69 + 406.4 / N^0.34 + 410.7 / D^0.28

    If we take Llama 2 70B’s values (70B parameters, 2T tokens) and plug them in, we end up with:

    L(70*10^9, 2*10^12) = 1.69 + 406.4 / (70*10^9)^0.34 + 410.7 / (2*10^12)^0.28 = 1.9211

    By comparison, if we take Turbo’s assumed values and plug them in (here I’ll use 13T training tokens, since that’s the popular estimate for GPT-4’s training set size, so I’ll assume it was used for Turbo as well), we end up with:

    L(20*10^9, 13*10^12) = 1.69 + 406.4 / (20*10^9)^0.34 + 410.7 / (13*10^12)^0.28 = 1.905

    So in this case, Turbo actually ends up ahead of Llama 2 by virtue of the larger training corpus (the short sketch below reproduces both numbers). It also means that if future models significantly increase the pretraining dataset size (whether that’s Llama 3, Llama 4, Mistral, or some other one), there’s a very real chance that smaller models can reach this level of quality in the future.
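
    If you want to double-check these numbers yourself, here’s a small Python sketch of the same fit; the constants are the ones from eq. 10, and the 20B / 13T Turbo figures are, again, just the popular estimates:

      # Chinchilla-style loss estimate: L(N, D) = E + A / N^alpha + B / D^beta,
      # using the fitted constants from eq. 10 of arXiv:2203.15556.
      E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

      def chinchilla_loss(n_params: float, n_tokens: float) -> float:
          """Predicted pretraining loss for a model of n_params trained on n_tokens."""
          return E + A / n_params**ALPHA + B / n_tokens**BETA

      print(f"{chinchilla_loss(70e9, 2e12):.4f}")   # Llama 2 70B -> 1.9211
      print(f"{chinchilla_loss(20e9, 13e12):.4f}")  # assumed Turbo -> 1.9053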


  • The main question is why price it so far below the Davinci level, which is a 175B model?

    There’s still a lot of room for models to be trained on more data. Take a look at the Llama papers: at the point training was stopped, the loss was still going down. Mistral is on par with Llama 2 13B to Llama 1 30B, and it’s a measly 7B model. If GPT-4 truly has a 13T-token dataset, the scaling-law equations from the Chinchilla paper indicate that a 20B model trained on 13T tokens would reach a lower loss than a 70B model trained on 2T tokens (the sketch at the end of this comment inverts the same fit to show how much data a smaller model needs to catch up). Llama 1 already showed that a 7B model could outperform previous open-source models (GPT-J-6B, Fairseq-13B, GPT-NeoX-20B, OPT-66B) just by training on more data, and that’s the reason the Llamas are so good to begin with.

    Model size is important, sure, but there are a lot of other important factors besides model size when it comes to training a good model.
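
    To make that last point concrete, here’s a short Python sketch (same constants and same caveats as above) that inverts the loss fit to ask how many tokens a smaller model would need to match a 70B model trained on 2T tokens; the 20B parameter count is just an illustrative example:

      # Invert the Chinchilla fit for D: given a target loss and a parameter count,
      # how many training tokens are needed to match it?
      E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

      def loss(n_params: float, n_tokens: float) -> float:
          return E + A / n_params**ALPHA + B / n_tokens**BETA

      def tokens_to_match(target_loss: float, n_params: float) -> float:
          """Dataset size at which a model of n_params reaches target_loss."""
          residual = target_loss - E - A / n_params**ALPHA
          if residual <= 0:
              raise ValueError("model too small to reach this loss at any dataset size")
          return (B / residual) ** (1 / BETA)

      target = loss(70e9, 2e12)                        # Llama 2 70B's predicted loss
      print(f"{tokens_to_match(target, 20e9):.2e}")    # ~7.2e12 tokens for a 20B model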