Submitted by samobon t3_1040w4q in MachineLearning
I wonder if this is the beginning of dissolution of NVIDIA's monopoly on AI.
Also this. $AMD still makes it explicit that they officially support ROCm only on CDNA GPUs, and even then only under Linux. That's an immediate turn-off for lots of beginner GPGPU programmers, who'll flock to CUDA instead since it works with any not-too-old gaming GPU from Nvidia. It's astonishing that Lisa Su still hasn't realized the gravity of this blunder.
I agree with you both: until small academic labs can use entry-level GPUs for research, there won't be mass adoption.
Maybe it was intended to keep AMD on a different path than NVIDIA. It looks incredibly stupid not to hop on the AI wave.
They added official support for Navi21 under Linux some time ago. It's still a very small number of supported devices compared to NVIDIA, but at least you're no longer required to purchase CDNA accelerators to get started.
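For anyone getting started on a consumer card, a minimal sanity check that a ROCm build of PyTorch actually sees the GPU (just a sketch; it assumes the official ROCm wheel from pytorch.org, and note that ROCm builds reuse the "cuda" device string):

```python
# Sanity check: does a ROCm build of PyTorch see the Radeon card?
# Assumes the official ROCm wheel; ROCm builds reuse the "cuda" device string.
import torch

print("HIP runtime:", torch.version.hip)      # None on CUDA builds, a version string on ROCm
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(4096, 4096, device="cuda")
    torch.cuda.synchronize()
    print("Matmul OK:", (x @ x).shape)         # goes through rocBLAS on AMD hardware
```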
Except there is an ecosystem monopoly at the cluster level too, because some of the most established, scalable, and reliable software (in fields like bioinformatics, for example) only provides CUDA implementations of key algorithms, and being able to accurately reproduce results computed by it is vital.
This essentially limits that software to running only on large CUDA clusters; you can't reproduce the results without the scale of a cluster.
Consider software for processing cryo-electron microscopy and ptychography data. Very, very few people are actually "developing" those software packages, but thousands of researchers around the world are using them at scale to process their micrographs. Those microscopists are not programmers, or really even cluster experts, and they just don't have the skillsets to develop on these code bases. They just need it to work reliably and reproducibly.
I've been working in HPC on a range of large-scale clusters for a long time. There has been a massive and dramatic demographic shift in the skillsets our cluster users have. A decade ago you wouldn't dream of letting someone who wasn't an HPC expert anywhere near your cluster. If a team of non-HPC people needed HPC, you'd hire HPC experts into the team to handle it for you, tune the workloads onto the cluster, and develop the code to make it work best. Now we have an environment where non-HPC people can pay for access and run their workloads directly, because they leverage these pre-tinned software packages.
128GB HBM would fit some serious models on a single device. But I have yet to see any real progress from AMD (something that I can buy) that would make me consider changing workflow away from nvidia hardware.
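For a rough sense of what 128GB of HBM buys you, here's a back-of-envelope sketch; the bytes-per-parameter figures are my own rule-of-thumb assumptions (fp16 weights for inference, ~16 bytes/param for Adam mixed-precision training), ignoring activations and KV cache:

```python
# Back-of-envelope: how many parameters fit in 128 GB of device memory?
# Bytes-per-parameter figures are rule-of-thumb assumptions, ignoring
# activations, KV cache, and fragmentation.
def max_params_billion(mem_gb, bytes_per_param):
    return mem_gb * 1e9 / bytes_per_param / 1e9

for label, bpp in [("int8 inference", 1), ("fp16 inference", 2), ("Adam mixed-precision training", 16)]:
    print(f"{label}: ~{max_params_billion(128, bpp):.0f}B params")
# int8 inference: ~128B, fp16 inference: ~64B, training: ~8B
```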
PyTorch 2.0 moving away from depending directly on CUDA and using Triton instead is good news for AMD. In the Triton GitHub repo they say that AMD GPU support is under development. AMD needs to invest some resources to help there.
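For context, the path in question is torch.compile, whose default TorchInductor backend generates Triton kernels for GPU graphs; a minimal sketch (model and shapes are made up):

```python
# PyTorch 2.0's torch.compile: the default TorchInductor backend lowers GPU
# graphs to Triton kernels instead of hand-written CUDA, which is why a Triton
# AMD backend matters. Model and shapes here are made up.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.GELU(), nn.Linear(1024, 10)).cuda()
compiled = torch.compile(model)   # new in PyTorch 2.0

x = torch.randn(64, 1024, device="cuda")
out = compiled(x)                 # first call compiles; later calls reuse the generated kernels
print(out.shape)
```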
AMD solutions have been in "development" for as long as I've been in contact with the space. The approaches rise and fall but never deliver fully. Maybe it'll be different in the future, who knows.
Because AMD never goes all in on software. Hopefully that will change with Victor Peng and $AMD will start throwing billions into software.
Weirdly enough, Xilinx is a huge investor in software and has absolutely amazing software support and customer service. I hope that translates over to AMD.
>AMD needs to invest some resources to help there.
That's where it will fail
Tired of Reddit cynics saying something will fail before it even starts.
"Those that fail to learn from history are doomed to repeat it."
Especially on the software side, AMD has a habit of releasing something and then not doing much for continued support, expecting the community to supply the labor.
Previously AMD didn't have the budget for it. They do now, and have really only had it for the last two-ish years.
Will they now put resources towards it? I hope so. But it also appears AMD is trying to get products into mega DC/supercomputer applications and spreading use that way.
Isn't continued support one of the selling points for AM5? That they supported the previous gen for ages and plan to again.
Software.
Having AI compute hardware is rather pointless without the supporting software.
Nvidia has an entire CUDA ecosystem for developers to use
Absolutely agree. It's been a while since I've had AMD hardware, but I'd consider it again (especially CPU)... I just haven't been aware of specific issues with their software either; I mean, Intel, AMD, and Nvidia have all had bugfixes and patching for drivers and firmware. Is there something I've missed about AMD and software?
BTW, I haven't had enough disposable income to upgrade, so I've been stuck on a 4590K for about 6 years. I hate my motherboard software (that's Asus bloatware) and had so much trouble getting the NVMe and RAID to work... but once I did, it's been OK. The 1070 I have is getting a bit too small for working with ML/AI, but what can you do... it still runs most newish games too.
>Is there something I've missed about AMD and software?
They have this https://gpuopen.com/
Which seems great in theory, but some of that hasn't been touched in a long time.
Radeon Rays: May 2021
They'll release something, do a bunch of initial work on it, and then it fades
Well, that is a genuine shame; Nvidia really needs some competition in this space. I'm sure plenty of researchers and enthusiasts would happily use different hardware (as long as porting was easy). I've written some CUDA C++ and it's not bad. Manufacturer-specific code always feels a bit gross, but the GPU agent-based modeling framework I was using was strictly CUDA.
Nvidia needs some competition fr fr. I can't even consider buying AMD because the entire data science community has pinned itself to CUDA.
ROCm users have been failed for the past 3 years tho.
Any info about AMD APUs? By now I've given up hoping for AMD to make ROCm available for APUs. I don't know much about Triton: does it support APUs like the 5600G?
ROCm works pretty well these days on my 6900 XT?
But there is no official support for your card.
For an individual that's pretty true of any card - Nvidia will probably ignore your random CUDA error and redirect you to the forums to figure it out, whether it's a K80 or an H100.
What models have you tried? Wonder what the gaps between CUDA and ROCm are.
So far a lot of them - I haven't had any issues with various Stable Diffusion models, DeOldify, BLOOM-3B, and basically anything else I've tried.
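For reference, the Stable Diffusion case is roughly this (a sketch using the diffusers library; the model id and prompt are just examples, and on a ROCm build the device string is still "cuda"):

```python
# Sketch: Stable Diffusion via the diffusers library on a ROCm PyTorch build.
# The model id and prompt are only examples; the device string stays "cuda".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a red bicycle").images[0]
image.save("out.png")
```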
Can you bench training with https://github.com/karpathy/nanoGPT and a 100M+ parameter GPT model?
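Not the nanoGPT run you're asking for, but here's a crude, self-contained tokens/sec harness in the same spirit that could be run on both a ROCm and a CUDA box for a first comparison; all sizes are my assumptions (roughly GPT-2-small scale), and it measures raw throughput only, not convergence:

```python
# Crude tokens/sec training harness (not nanoGPT): a generic Transformer
# stand-in with a dummy loss, useful only for comparing raw throughput
# between a ROCm card and an NVIDIA card. All sizes are assumptions.
import time
import torch
import torch.nn as nn

d_model, n_layers, seq, batch = 768, 12, 1024, 8   # roughly GPT-2-small sized
layer = nn.TransformerEncoderLayer(d_model, nhead=12, dim_feedforward=4 * d_model,
                                   batch_first=True)
model = nn.TransformerEncoder(layer, n_layers).cuda().half()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(batch, seq, d_model, device="cuda", dtype=torch.half)
for step in range(12):
    if step == 2:                                  # time only the steady-state steps
        torch.cuda.synchronize()
        t0 = time.time()
    loss = model(x).float().pow(2).mean()          # dummy loss; we only care about speed
    opt.zero_grad(set_to_none=True)
    loss.backward()
    opt.step()
torch.cuda.synchronize()
print(f"{10 * batch * seq / (time.time() - t0):,.0f} tokens/sec")
```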
Nope. I develop in my free time for AMD chipsets. Inferior performance to Nvidia all over the place, and the support sucks. Prepare to fix the 'supported' libraries yourself.
AMD should seriously invest in developing a credible software stack rather than hyping new chips.
They're trying pretty hard, but Nvidia has spent thousands of man years on this stuff and built ecosystem and community around it. It's not easy. Plus it's hard for AMD to hire the best software folks.
How many APUs can be connected together via Infinity Fabric? Hopefully they can do 8-16 to challenge DGX.
Cerebras is possibly ending NVIDIA and AMD. Those two are tied to an older design which was good for a while, but has run its course and is now in decline.
Cerebras is pretty well suited for large language models like GPT-3. Their latest-generation product can be clustered easily to train huge models. I wouldn't say they're ending AMD and NVDA though, but for huge language models to be democratized, some disruptive technologies have to happen. No one other than whales can afford to train GPT-3 today.
It will not be long until we see these chips designed by ai specifically designed to design chips for the purpose of designing super efficient chips for designing chips. This is it...the chips designing the chips to design the chips. Singularity here we come.
No, it's train models to solve problems to make more data to train models. That's how it will go.
The "monopoly" is from the ecosystem mostly, not the hardware itself. Practicioners and researchers have a much better time using consumer/entry level professional nvidia hardware. So they use nvidia.
Mind you, at the supercomputer level there is no real "monopoly", as those people just develop their solutions from the ground up.