Title says it all. Why spend so much effort finetuning and serving models locally when any closed-source model will do the same job for cheaper in the long run? Is it a philosophical argument (as in freedom vs. free beer)? Or are there practical cases where a local model does better?

Where I’m coming from: I need a copilot, primarily for code but maybe for automating personal tasks as well, and I’m wondering whether to put down the $20/mo for GPT-4 or roll my own personal assistant and run it locally (I have an M2 Max, so compute wouldn’t be a huge issue).
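For context, “run it locally” on Apple Silicon can be as small as the sketch below, using the llama-cpp-python bindings with a GGUF checkpoint. The model file and prompt are placeholders I picked for illustration, not a recommendation:

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes a GGUF model file downloaded separately; the path is a hypothetical example.
from llama_cpp import Llama

llm = Llama(
    model_path="./codellama-7b-instruct.Q4_K_M.gguf",  # placeholder local file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to Metal on an M2 Max
)

out = llm(
    "Write a Python function that parses an ISO 8601 date string.",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```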

  • Bright-Question-6485@alien.topB
    1 year ago

    Maybe I missed it, but the most important argument may have slipped through, which is quite simply that GPT-4 looks and feels good; however, if you have a clear task (anything, literally - examples are data-structuring pipelines, information extraction, repairing broken data models), then a fine-tuned llama model will make GPT-4 look like a toddler. It’s a crazy difference, and if you don’t believe me I can only recommend giving it a try and benchmarking the results. Plus, it lets you iron out the misunderstandings you would otherwise have to prompt around in GPT-4. There are clear limits to where prompt engineering can take you.

    To be clear, I am really saying that there are things GPT-4 just cannot do where a fine-tuned llama gets the job done. A minimal sketch of what such a fine-tune can look like follows below.
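
    This is one way the fine-tuning could be set up, assuming Hugging Face Transformers plus PEFT (LoRA) on a small Llama checkpoint. The base model name, the `extraction_train.jsonl` data file, and all hyperparameters here are illustrative assumptions, not the commenter’s actual setup:

    ```python
    # Sketch of a LoRA fine-tune for a narrow task (e.g. information extraction).
    # pip install transformers peft datasets
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    base = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint (gated on HF)
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Wrap the base model with low-rank adapters; only these small matrices train.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
    ))

    # Hypothetical task data: a JSON-lines file of {"text": ...} examples
    # demonstrating exactly the behavior you want the model to nail.
    data = load_dataset("json", data_files="extraction_train.jsonl")["train"]
    data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512))

    Trainer(
        model=model,
        args=TrainingArguments("./lora-out", per_device_train_batch_size=2,
                               num_train_epochs=3, learning_rate=2e-4),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()
    model.save_pretrained("./lora-out")  # saves only the small adapter weights
    ```

    The point of LoRA here is practical: the adapter is a tiny fraction of the full model, so iterating on a task-specific model (and re-running your benchmark against GPT-4) stays cheap and fast on local hardware.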