AnimalNo5205 t1_ir9xnpe wrote
To head off the “but we barely have DDR5” comments, the G is important here. This is memory intended for use by graphics cards, and GDDR6 has been a thing for years now. AMD tried to move the industry towards a new standard called High Bandwidth Memory with their RX Vega products, but that effort never got anywhere.
Avieshek OP t1_ir9zr4m wrote
Samsung hasn't forgotten about HBM in their press release.
Techn028 t1_ira4rwk wrote
HBM was so cool on the Fury cards, my only wish is that there had been more of it and that the chip could push harder
Pavetsu t1_ir9zh28 wrote
Consoles also use only GDDR, there's n DDR in them.
AfraidBreadfruit4 t1_irb0eqq wrote
>there's n DDR in them.
It's GDR in english /s
RAZR31 t1_iragpla wrote
There is if you put the game disc in.
Jaohni t1_irajujt wrote
I wouldn't say that HBM never went anywhere; it was a high-bandwidth, high-latency alternative to GDDR's (relatively) low-bandwidth, low-latency design. GDDR gets there by essentially overclocking its interconnects, which is why HBM ends up much more power efficient. And then AMD overclocked their Vega series to the moon, but anyway...
...HBM is still alive and well, but it's more commonly used in server and workstation applications ATM, where bandwidth is worth as much as the compute in the right workload. We might actually see it on some high end gaming GPUs in a year and a half to two and a half years, since certain incoming trends in game rendering (raytracing, machine learning, and so on) can benefit from increased bandwidth. At least on the AMD side, though, I think they'd prefer 3D stacked cache: beyond offering higher effective bandwidth, it also improves the perceived latency, and it improves power efficiency more than HBM does.
oscardssmith t1_irdexbw wrote
As I understood it, HBM isn't higher latency. It's just more expensive. Is that incorrect?
Jaohni t1_irdfkqr wrote
So, imagine you have one lane to transfer data from memory to a processor. You're probably going to clock that lane as quickly as you possibly can, right? Well, that means it'll have the lowest latency possible, too. But if you added a second lane, you might not be able to totally double bandwidth, because you might not be able to clock both lanes as high as just the one. Maybe you get 1.8 or 1.9x the bandwidth of just the one... at the cost of slightly higher latency, in this case 1.1x the latency.
The same idea is basically true of HBM versus GDDR. GDDR essentially has overclocked interconnects to get certain bandwidth targets, and as a consequence has lower latency, but with HBM it's difficult to clock all those interconnects at the same frequency, so you get higher bandwidth and higher latency overall. Because it's less efficient to overclock those lanes, though, HBM ends up being less power hungry (usually).
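The trade-off described above can be sketched with a toy calculation. All numbers here are made up for illustration (they are not real GDDR or HBM specs), and the latency proxy is deliberately crude:

```python
# Toy model of the "few fast lanes vs. many slow lanes" trade-off.
# Numbers are illustrative only, not real GDDR/HBM figures.

def bus(lanes, clock_ghz, bits_per_lane=32):
    """Return (bandwidth in GB/s, latency in ns) for a toy memory bus."""
    bandwidth_gbs = lanes * bits_per_lane * clock_ghz / 8  # bits -> bytes
    latency_ns = 1.0 / clock_ghz  # crude proxy: slower clock = higher latency
    return bandwidth_gbs, latency_ns

# Narrow, highly clocked bus (GDDR-like): few lanes, high clock.
narrow_bw, narrow_lat = bus(lanes=8, clock_ghz=2.0)

# Wide, modestly clocked bus (HBM-like): many lanes, lower clock.
wide_bw, wide_lat = bus(lanes=32, clock_ghz=1.0)

print(f"narrow: {narrow_bw:.0f} GB/s at {narrow_lat:.2f} ns")  # 64 GB/s, 0.50 ns
print(f"wide:   {wide_bw:.0f} GB/s at {wide_lat:.2f} ns")      # 128 GB/s, 1.00 ns
```

Even with the clock halved, the wide bus comes out ahead on total bandwidth while paying more latency per access, which is the shape of the GDDR-vs-HBM trade-off the comment describes.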
6SixTy t1_irbkko9 wrote
I wouldn't call HBM as a concept an abject failure, as the tech has found a niche in Nvidia and AMD's top dog price-is-no-object accelerators.
Problem was that AMD tried to sell cards with the tech to consumers, who don't really benefit from the high-bandwidth part of it, so all it really did at the end of the day was bump up the cost and limit VRAM amounts.
Also, tbd on new info from RDNA3, as it's supposed to include MCM.
ChrisFromIT t1_irdfpaa wrote
The issue with HBM is that the costs are way too expensive, hence why you typically only find it in enterprise-grade GPUs.