Yesterday Microsoft and NVIDIA announced a new partnership (https://nvidianews.nvidia.com/news/nvidia-introduces-generative-ai-foundry-service-on-microsoft-azure-for-enterprises-and-startups-worldwide) that goes well beyond their existing ones.
I just hope there is a way of compiling CUDA code for the new architecture. I have a couple of algorithms that I really don't want to rewrite :(
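For what it's worth, a lot of kernels of that shape don't necessarily need a ground-up rewrite: AMD's hipify tools can often translate straightforward CUDA to HIP mechanically. A minimal sketch of the kind of code I mean (the kernel name and sizes are invented purely for illustration):

    // saxpy.cu -- illustrative only; "saxpy" and the sizes are made up for the example.
    // Plain CUDA like this is typically what hipify-perl can convert mechanically.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));        // unified memory keeps the sketch short
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // 256 threads per block
        cudaDeviceSynchronize();

        printf("y[0] = %f (expect 4.0)\n", y[0]);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

Even when an automatic translation isn't possible, the kernel body is usually the small part; it's the surrounding tooling and libraries that are the real lock-in.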
> that I really don't want to rewrite
Manually writing code? What is this? 2023?
It would be nice to have some options in this space, for sure.
Is this the beginning of the end of CUDA's dominance?
Not unless Intel/AMD/MS/whoever ramps up their software APIs to the level of efficiency and just-works-ness that CUDA provides.
I don't like NVIDIA/CUDA any more than the next guy, but it's far and away the best thing going right now. If you have an NVIDIA card, you can get the best possible AI performance out of it with basically zero effort on either Windows or Linux (a minimal sketch of what I mean follows this comment).
Meanwhile, AMD is either unbearably slow with OpenCL or an arduous slog to get ROCm working (unless you're using specific cards on specific Linux distros). Intel is limited to OpenCL at best.
Until some other manufacturer provides something that can legitimately compete with CUDA, CUDA ain't going anywhere.
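To put the "zero effort" part in concrete terms (my own illustration, not a benchmark): with nothing beyond the stock driver and toolkit installed, a few lines compiled with nvcc behave identically on Windows and Linux.

    // Hedged sketch: query the first visible GPU with the CUDA runtime API.
    // The same file compiles with nvcc and runs unchanged on Windows or Linux.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            printf("No CUDA device visible.\n");
            return 1;
        }
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);               // device 0 = first GPU
        printf("%s, %.1f GiB VRAM, compute capability %d.%d\n",
               prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.major, prop.minor);
        return 0;
    }

ROCm has an equivalent call (hipGetDeviceProperties), so the API gap isn't the problem; getting to the point where it runs on your particular card and distro is where the slog tends to be.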
Do they plan on offering the MI200 or MI300 to the public?
Can't wait to try out a GPU with more than 80 GB of VRAM.
The MI300 is most likely going to have a different programming model.
There's a long way to go, but more options will be beneficial for everyone.
But what about developer support? Can they come up with a developer ecosystem that can compete with CUDA's, which has had more than 15 years to mature? Will developers feel comfortable switching? Developer communities like CUDA's or Java's take years to build. Can one be manufactured overnight?