AdamEgrate@alien.top · 1 year ago

There is strong evidence in the literature that you can reduce parameter count if you increase the number of training tokens (at the cost of more training compute); the Chinchilla scaling laws (Hoffmann et al., 2022) make this trade-off explicit. Not saying that's what they did here, but I also wouldn't be surprised, given how important it is for inference to be as efficient as possible. Rough sketch below.
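
A minimal back-of-the-envelope sketch of the trade-off, assuming the Chinchilla loss fit L(N, D) = E + A/N^alpha + B/D^beta with the constants reported by Hoffmann et al. (2022). The model sizes are illustrative picks, not anything claimed about the model under discussion:

```python
# Chinchilla-style loss fit: L(N, D) = E + A / N**alpha + B / D**beta,
# with the constants reported in Hoffmann et al. (2022). Model sizes below
# are illustrative only, not claims about the model discussed above.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for n_params parameters and n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Reference point: a 70B-parameter model trained on 1.4T tokens
# (roughly the Chinchilla-optimal ratio).
ref = loss(70e9, 1.4e12)

# How many tokens would a 13B model need to reach the same predicted loss?
# Solve E + A / N**alpha + B / D**beta = ref for D.
gap = ref - E - A / 13e9**alpha          # loss budget left for the data term
tokens_needed = (B / gap) ** (1 / beta)  # invert B / D**beta = gap

print(f"70B @ 1.4T tokens -> predicted loss {ref:.3f}")
print(f"13B needs ~{tokens_needed / 1e12:.1f}T tokens to match it")
```

Under this fit the 13B model needs on the order of 8T tokens to match the 70B model's predicted loss, which is exactly why over-training small models has become so attractive when inference cost dominates.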