I have a query that uses around 300 tokens, and at 0.06 USD per 1,000 tokens that works out to roughly 0.02 USD per request.

Let's say I deployed a local LLaMA model on RunPod on one of the cheaper machines: would that request be cheaper than running it through GPT-4?
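For a rough sense of the breakeven point, here is a minimal sketch. The API price and token count come from the question above; the GPU hourly rate is a made-up placeholder, not an actual RunPod quote:

```python
# Per-request API cost, using the numbers from the question.
API_COST_PER_1K_TOKENS = 0.06   # USD, from the question
TOKENS_PER_REQUEST = 300

cost_per_request = TOKENS_PER_REQUEST / 1000 * API_COST_PER_1K_TOKENS
print(f"API cost per request: ${cost_per_request:.3f}")  # ~$0.018

# Hypothetical hourly rate for a cheap rented GPU (placeholder value).
GPU_HOURLY_RATE = 0.20  # USD/hr

# Number of requests per hour at which renting the GPU breaks even
# with paying the API per token.
breakeven = GPU_HOURLY_RATE / cost_per_request
print(f"Requests/hour needed to beat the API: {breakeven:.0f}")  # ~11
```

Under those assumed numbers, self-hosting only pays off if the pod stays busy enough (here, about a dozen such requests per hour) or if it scales to zero when idle.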

  • DarthNebo@alien.top

    Try HuggingFace Endpoints with one of the cheap T4-based serverless instances; these also go to sleep after 15 minutes of inactivity.
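    For reference, a minimal sketch of calling such an endpoint once it is deployed, assuming it serves a text-generation model. The endpoint URL is a placeholder you get from the HF console after creating the endpoint, and the token is read from the environment:

    ```python
    import os
    import requests

    # Placeholder URL: substitute the one shown in the HF console
    # after creating your serverless Inference Endpoint.
    ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
    HF_TOKEN = os.environ["HF_TOKEN"]

    resp = requests.post(
        ENDPOINT_URL,
        headers={
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        },
        # Payload shape for a text-generation endpoint; other task
        # types expect different input schemas.
        json={
            "inputs": "Explain RunPod pricing in one sentence.",
            "parameters": {"max_new_tokens": 100},
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json())
    ```

    Note that the first request after the endpoint has gone to sleep will incur a cold-start delay while the instance wakes up.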