https://www.amazon.se/-/en/NVIDIA-Tesla-V100-16GB-Express/dp/B076P84525 (price in my country: 81,000 SEK, or 7,758.17 USD)

My current setup:
NVIDIA GeForce RTX 4050 Laptop GPU
CUDA cores: 2560
Memory data rate: 16.00 Gbps
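If you want to sanity-check specs like these from code, a quick PyTorch query works (a sketch, assuming a CUDA-enabled install; note that PyTorch reports SM count rather than CUDA cores directly):

```python
import torch

# Print basic properties of the first CUDA device, if one is present.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU:  {props.name}")
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GiB")
    # CUDA core count isn't exposed directly; multiply SMs by cores-per-SM
    # for your architecture (128 on Ada Lovelace: 20 SMs x 128 = 2560).
    print(f"SMs:  {props.multi_processor_count}")
else:
    print("No CUDA device found")
```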

My laptop GPU works fine for most ML and DL tasks. I am currently fine-tuning a GPT-2 model on some data that I scraped, and it worked surprisingly well on my current setup. So it's not like I am complaining.
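For illustration, the kind of fine-tuning run I mean is roughly the sketch below, assuming the Hugging Face transformers and datasets libraries; "scraped.txt" is a placeholder, not my actual data:

```python
# Minimal GPT-2 fine-tuning sketch (Hugging Face transformers + datasets).
# "scraped.txt" is a hypothetical one-document-per-line text corpus.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "scraped.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-finetuned",
        per_device_train_batch_size=2,  # small batch + accumulation for laptop VRAM
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        fp16=True,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```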

I do, however, own a stationary PC with an old GTX 980 GPU, and I was thinking of replacing that with the V100.

So my question to this community is: for those of you who have bought your own super-duper GPU, was it worth it? And what were your experiences and realizations when you started tinkering with it?

Note: Please refrain from giving me snarky comments about using cloud GPUs. I am not interested in that (and I am in fact already using one for another ML task that doesn't involve fine-tuning). I am interested in hearing hardware hobbyists' opinions on this matter.

  • synn89@alien.top · 1 year ago

    I dug into this a lot back when I was building 2 AI servers for home use, for both inference and training. Dual 4090s are the best you can get for speed at a reasonable price. But for the best "bang for your buck" you can't beat used 3090s. You can pick them up reliably for $750-800 each off of eBay.

    I went with dual 3090’s using this build: https://pcpartpicker.com/list/V276JM

    I also went with NVLink, which was a waste of money. It doesn't really speed things up, as the board can already do x8 PCIe on dual cards.
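    If you want to verify what NVLink would actually buy you on your own box, a quick check is whether the two cards already have peer-to-peer access over PCIe (a sketch, assuming PyTorch and two visible GPUs):

    ```python
    import torch

    # P2P access is what NVLink accelerates; over PCIe x8 it is often
    # already enabled, which is why the bridge bought me so little.
    if torch.cuda.device_count() >= 2:
        print("P2P 0<->1:", torch.cuda.can_device_access_peer(0, 1))
    # `nvidia-smi topo -m` shows the actual link type (NV# vs. PIX/PHB).
    ```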

    But a single 3090 is a great card you can do a lot with. If that's too much money, go with a 3060 12GB card. The server-oriented stuff is a waste for home use. Nvidia 30xx and 40xx series consumer cards will just blow them away in a home environment.
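    To put rough numbers on the VRAM side of that, here's a back-of-the-envelope sketch; the bytes-per-parameter figures assume fp16 weights for inference and fp16 weights/gradients plus fp32 Adam states for training, and ignore activations:

    ```python
    # Rough VRAM needed per model size (weights/optimizer only).
    BYTES_INFER = 2    # fp16 weights
    BYTES_TRAIN = 16   # fp16 weights+grads (4) + fp32 Adam master/m/v (12)

    for name, params in [("GPT-2 124M", 124e6),
                         ("GPT-2 XL 1.5B", 1.5e9),
                         ("7B model", 7e9)]:
        print(f"{name:>13}: ~{params * BYTES_INFER / 1024**3:5.1f} GiB infer, "
              f"~{params * BYTES_TRAIN / 1024**3:6.1f} GiB full fine-tune")
    ```

    By that math a 3060 12GB handles fp16 inference up to a few billion parameters, a 24GB 3090 serves a 7B model comfortably, and full fine-tuning anything much past GPT-2 scale wants multiple cards or offloading.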