I have a query that uses around 300 tokens, and since 1,000 tokens cost 0.06 USD, that works out to roughly 0.02 USD for the request.

Let's say I deployed a local LLaMA model on RunPod, on one of the cheaper machines. Would that request be cheaper than running it through GPT-4?
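
For reference, a quick sketch of that arithmetic in Python (assuming a flat per-token rate; GPT-4 actually bills prompt and completion tokens at different rates, which this ignores):

```python
# Cost of a single request given token usage and a flat per-1k-token rate.
def request_cost(tokens: int, usd_per_1k_tokens: float) -> float:
    return tokens / 1000 * usd_per_1k_tokens

print(request_cost(300, 0.06))  # 0.018 -> roughly 0.02 USD, as above
```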

  • tenmileswide@alien.topB

    Depends entirely on what model you want. The Llama 2 13B serverless endpoint would only cost $0.001 for that request on RunPod.

    If you rent a cloud pod, it costs the same per hour no matter how much or how little you send to it, so the per-request cost comes down entirely to how many requests you can keep feeding it.
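
    To make that concrete, here is a sketch of the break-even math (the $0.50/hr pod rate is purely illustrative, not an actual RunPod price):

    ```python
    # On an hourly pod the rate is fixed, so the effective per-request
    # cost falls as throughput rises. 0.50 USD/hr is an assumed figure.
    def pod_cost_per_request(usd_per_hour: float, requests_per_hour: int) -> float:
        return usd_per_hour / requests_per_hour

    for rph in (10, 100, 1000):
        print(rph, pod_cost_per_request(0.50, rph))
    # 10/hr   -> 0.05 USD   (worse than GPT-4's ~0.02)
    # 100/hr  -> 0.005 USD
    # 1000/hr -> 0.0005 USD (far cheaper)
    ```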