Rutabaga-Agitated to LocalLLaMA • Should corporations and smaller businesses be training, refining, and running local LLMs to avoid the future cost of the new cloud services M$ and Amazon are rolling out?
1 year ago
You are right. But if you have a Chinese customer, for example, different problems might come up, like with NVIDIA and GPUs. Independence is key for a lot of players.
We built a 4x RTX 4090 setup on a mining rig. That is 96 GB of VRAM for roughly $10k… it does not get cheaper than that. Best compute per cost right now, I think.
https://preview.redd.it/nfq4olntq54c1.png?width=1812&format=pjpg&auto=webp&s=a5308bb5eec778072f8d6a394b5243ca33c7fd87
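For what it's worth, the back-of-the-envelope math behind that claim (a minimal sketch; the $10k total is the commenter's round number, the 24 GB per card is the stock RTX 4090 spec):

```python
# Rough cost check for the 4x RTX 4090 rig described above.
# The total build cost is an assumed round figure, not a quote.

NUM_GPUS = 4
VRAM_PER_GPU_GB = 24            # stock RTX 4090: 24 GB GDDR6X
TOTAL_BUILD_COST_USD = 10_000   # round number from the comment

total_vram_gb = NUM_GPUS * VRAM_PER_GPU_GB
cost_per_gb_usd = TOTAL_BUILD_COST_USD / total_vram_gb

print(f"Total VRAM: {total_vram_gb} GB")                  # 96 GB
print(f"Cost per GB of VRAM: ${cost_per_gb_usd:.0f}")     # ~$104/GB
```

That works out to roughly $104 per GB of VRAM for the whole rig, which is the "compute per cost" argument being made here.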