AdamEgrate@alien.top to LocalLLaMA • New Microsoft codediffusion paper suggests GPT-3.5 Turbo is only 20B, good news for open source models?
Scaling laws suggest that you can reduce parameter count by increasing the number of tokens. There is a limit, however, and it seems to be around 32% of the original model size: https://www.harmdevries.com/post/model-size-vs-compute-overhead/
So that would put the resulting model at around 56B (32% of GPT-3's 175B parameters). Not sure how they got it down further; maybe through quantization.
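The arithmetic behind that estimate is simple. A minimal sketch, assuming the base model is the 175B GPT-3 and using the ~32% floor from the post above (both are assumptions, not confirmed figures):

```python
# Back-of-the-envelope estimate: smallest model that could match the
# original's loss by training on more tokens, per the ~32% floor
# discussed in the harmdevries.com post (assumed base model: 175B).

ORIGINAL_PARAMS_B = 175   # assumed original model size, in billions (GPT-3)
MIN_FRACTION = 0.32       # approximate floor before compute overhead blows up

min_params_b = ORIGINAL_PARAMS_B * MIN_FRACTION
print(f"~{min_params_b:.0f}B parameters")  # -> ~56B parameters
```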
Yeah. They want people to believe that if it's made by a human it's fair use for training models, but if it's made by an AI it's not.