Title says it all. Why spend so much effort fine-tuning and serving models locally when any closed-source model will do the same job more cheaply in the long run? Is it a philosophical argument (as in free-as-in-freedom vs. free-as-in-beer), or are there practical cases where a local model does better?

Where I’m coming from: I need a copilot, primarily for code but maybe for automating personal tasks as well, and I’m wondering whether to put down the $20/mo for GPT-4 or roll my own personal assistant and run it locally (I have an M2 Max, so compute wouldn’t be a huge issue).
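For context on the local option, here is a minimal sketch of what running a model on an M2 Max could look like with llama-cpp-python; the GGUF file name, prompt, and parameters are placeholders, not a recommendation:

```python
# Minimal local-inference sketch using llama-cpp-python (assumed installed via
# `pip install llama-cpp-python`). The model path and settings are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/codellama-13b-instruct.Q4_K_M.gguf",  # hypothetical local weights
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to Metal on Apple Silicon
)

out = llm(
    "Write a Python function that parses an ISO 8601 date string.",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```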

  • Nonetendo65@alien.topB

    GPT-4 is plagued with outages. I’ve found the API unreliable to use in a production setting. Perhaps this will improve with time :)

  • Bright-Question-6485@alien.topB

    Maybe I missed it, but the most important argument might have slipped through, and it is quite simply this: GPT-4 looks and feels good, but if you have a clear task (anything, literally: data structuring pipelines, information extraction, repairing broken data models), then a fine-tuned Llama model will make GPT-4 look like a toddler. It’s crazy, and if you don’t believe me I can only recommend giving it a try and benchmarking the results; it is that much of a difference. Plus, fine-tuning lets you iron out misunderstandings you can’t fix in GPT-4, because there are clear limits to where prompt engineering can take you.

    To be clear, I’m really saying there are things GPT-4 just cannot do where a fine-tuned Llama simply gets the job done.
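    A minimal sketch of the kind of task-specific fine-tune being described, using LoRA via Hugging Face transformers and peft; the base model, dataset file, and hyperparameters below are illustrative assumptions, not the commenter’s actual setup:

    ```python
    # Hedged sketch: LoRA fine-tune of a Llama base model on a task-specific dataset.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    base = "meta-llama/Llama-2-7b-hf"                        # assumed base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

    # Hypothetical JSONL of prompt plus expected structured output in a single "text" field.
    data = load_dataset("json", data_files="extraction_examples.jsonl")["train"]
    data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="llama-extractor", per_device_train_batch_size=1,
                               gradient_accumulation_steps=8, num_train_epochs=3,
                               learning_rate=2e-4),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("llama-extractor-lora")            # saves only the LoRA adapter
    ```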

  • Independent_Key1940@alien.topB

    It’s not just philosophical. When you have a technology that holds the power to change the world, it should either be destroyed or put into everyone’s hands, so that people can adapt and be at ease with it. Otherwise the person inventing the technology will rule the world. Or, in today’s world, they will influence politics, will have support from powerful people, will attract wealth, and will make mistakes which could destroy the world.

    So it’s not just about morals, it’s about survival.

  • Mission_Revolution94@alien.topB

    Because they are run by the Borg (Microsoft).

    Never think that ease is the only reason to do something: privacy, security, and overall control of your own domain are very good reasons.

    Another great reason: local never says no.

  • kivathewolf@alien.topB

    I like the analogy that Andrej Karpathy posted on X a while back: the LLM OS.

    Think of the LLM as an OS. There are closed-source OSes like Windows and macOS, and then there are open-source OSes based on Linux. Each has its place. For most regular consumers, Windows and macOS are sufficient. However, Linux has its place in all kinds of applications (from the Mars rover to your Raspberry Pi home automation project). LLMs may evolve in a similar fashion. For highly specific use cases, it may be better to use a small LLM fine-tuned for your application. In cases where data sovereignty is important, it’s not possible to use OpenAI’s tools. And if you have an application that needs an AI service where internet isn’t available, local models are the only way to go.

    It’s also important to understand that when you use GPT-4, you aren’t using just an LLM but a full solution: the LLM, RAG, classic software functions (math), internet browsing, and maybe even other “expert LLMs”. When you download a model from Hugging Face and run it, you are using just one piece of the puzzle, so yes, your results will not be comparable to GPT-4. What open source gives you is the ability to build a system like GPT-4, but you need to do the work to get it there.
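    To make that concrete, here is a toy sketch of wiring one such piece, retrieval, around a local model; the embedder, model file, and “knowledge base” are stand-ins, and any proper vector store would do the same job:

    ```python
    # Toy RAG loop around a local model: embed documents, retrieve the closest one,
    # and stuff it into the prompt. Model names and documents are placeholders.
    import numpy as np
    from llama_cpp import Llama
    from sentence_transformers import SentenceTransformer

    docs = [
        "Invoices are exported nightly to the data warehouse.",
        "API keys rotate every 90 days.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    llm = Llama(model_path="models/llama-2-13b-chat.Q4_K_M.gguf", n_ctx=4096)

    def answer(question: str) -> str:
        q_vec = embedder.encode([question], normalize_embeddings=True)[0]
        context = docs[int(np.argmax(doc_vecs @ q_vec))]   # cosine similarity on unit vectors
        prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"
        return llm(prompt, max_tokens=128)["choices"][0]["text"]

    print(answer("How often do API keys rotate?"))
    ```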

  • jpalmerzxcv@alien.topB

    Data collection. You’re sending all of your queries to GPT-4’s servers, to people you don’t know. Who knows what they’re doing with it?

  • ThisGonBHard@alien.topB

    “closed-source model”

    You gave your own answer:

    Not monitored

    Not controlled

    Uncensored

    Private

    Anonymous

    Flexible

  • ekowmorfdlrowehtevas@alien.topB

    “Those who would give up privacy to purchase a temporarily better large language model interface, deserve neither” - Benjamin Franklin

  • RadiantQualia@alien.topB

    GPT-4 is much much better for most normal use cases. Hopefully that changes one day, but OpenAI’s lead might just keep getting bigger.

  • naoyao@alien.topB

    I was long a holdout against ChatGPT because I wasn’t confident about OpenAI’s handling of my personal information. I started using Llama just a couple of weeks ago, and whilst I’m happy that it can be run locally, I’m still looking forward to truly open-source LLMs, because Llama isn’t actually open source.

  • ccbadd@alien.topB

    For me it’s just censorship and privacy. Maybe API costs will be an issue too once we get more apps.

  • tu9jn@alien.topB

    You won’t get banned from a local model for asking the wrong questions, and GPT-4 has hourly limits as well.

    If you already have the hardware, why not try it? It’s literally free.

  • edwios@alien.topB

    No, nothing I am working on or will be working on will go anywhere I don’t control. Period. Besides, it’d get banned immediately anyway, so why bother lol

  • son_et_lumiere@alien.topB

    Once you get into the automation aspect, you’re going to need to hit the OpenAI API, and that’s an additional cost per 1K tokens beyond the $20 per month. That’ll start to add up fast when you’re passing a lot of data back and forth often.
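    A quick back-of-envelope check of that, with illustrative per-token rates rather than current pricing:

    ```python
    # Rough monthly API cost estimate; the rates below are assumptions for illustration only.
    INPUT_PER_1K = 0.03    # assumed $/1K prompt tokens
    OUTPUT_PER_1K = 0.06   # assumed $/1K completion tokens

    def monthly_cost(calls_per_day, prompt_tokens, completion_tokens, days=30):
        per_call = (prompt_tokens / 1000) * INPUT_PER_1K + (completion_tokens / 1000) * OUTPUT_PER_1K
        return calls_per_day * days * per_call

    # e.g. an automation sending 2,000 tokens of context and getting 500 back, 200 times a day
    print(f"${monthly_cost(200, 2000, 500):,.2f} per month")   # about $540 under these assumptions
    ```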

  • wiesel26@alien.topB

    Control. You can have the control, or you can let someone else have the control. Open-source LLMs give the masses another option, an option they don’t have to pay for. Your question is like asking why you don’t use Microsoft 365 instead of OpenOffice.