ben_db t1_iyrqqfk wrote

The problem is, Stable Diffusion isn't a fixed-length operation. Yes, it's 50 iterations, but those iterations vary massively based on the input prompt, output resolution, and channels, as well as about 10 other settings.
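To make the point concrete, here's a rough sketch (illustrative assumptions, not measured benchmarks) of why resolution alone changes per-step cost: latent diffusion denoises a tensor whose size scales with the output resolution, so each of the 50 steps does proportionally more work.

```python
# Rough sketch: per-step work in latent diffusion scales with the
# latent tensor size, which depends on the chosen settings.
# The downscale factor of 8 and 4 latent channels match common
# Stable Diffusion defaults; treat these as assumptions.

def latent_elements(width: int, height: int, channels: int = 4,
                    downscale: int = 8) -> int:
    """Number of elements in a (channels, height/8, width/8) latent."""
    return channels * (height // downscale) * (width // downscale)

base = latent_elements(512, 512)   # common default resolution
large = latent_elements(768, 768)  # a larger output

# The 768x768 latent has 2.25x the elements of the 512x512 one,
# so the "same 50 iterations" are not the same amount of work.
print(large / base)  # → 2.25
```

Attention layers scale even worse than linearly with latent size, so this ratio understates the real gap.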

33

AkirIkasu t1_iyu41du wrote

If you go to the actual github project you can see the full benchmarks and settings.

8

ben_db t1_iyukygi wrote

They should give comparisons in the article; that's the point.

Are Apple users just fine with this? It seems to happen a lot for Apple products.

Always "30% better" or "twice the performance" but never any actual meaningful numbers.

4

designingtheweb t1_iyvj0b5 wrote

TIL 9to5mac = Apple

4

ben_db t1_iyvm7c9 wrote

They do it just as much as Apple does; it seems common for Apple devices.

−3

Spirit_of_Hogwash t1_iysr7mx wrote

In the Ars Technica article they say that with an RTX 3060 it takes 8 seconds and with the M1 Ultra 9 seconds.

So once again Apple's "fastest in the world" claims are defeated by a mid-range GPU.

https://arstechnica.com/information-technology/2022/12/apple-slices-its-ai-image-synthesis-times-in-half-with-new-stable-diffusion-fix/

−6

dookiehat t1_iyt3gjg wrote

I think it is a software or compiler (?) issue. Stable Diffusion was written for Nvidia GPUs with CUDA cores. I don't know what sort of translation happens, but it probably leads to inefficiencies not experienced with Nvidia.
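For illustration (a generic PyTorch pattern, not code from the project in the article): models written against CUDA typically need an explicit fallback path to run on Apple Silicon via the MPS (Metal Performance Shaders) backend, and ops without Metal kernels fall back further to the CPU.

```python
# Hedged sketch of the usual device-selection pattern in PyTorch.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():          # Nvidia path: CUDA + cuDNN kernels
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple Silicon path: Metal backend
        return torch.device("mps")
    return torch.device("cpu")             # unaccelerated fallback

device = pick_device()
# A Stable-Diffusion-sized latent tensor on whatever backend we got.
x = torch.randn(1, 4, 64, 64, device=device)
print(device.type)
```

Any op in the model with no MPS implementation either errors out or runs on CPU, which is one plausible source of the inefficiency mentioned above.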

19

sylfy t1_iytgr3x wrote

CUDA and the accompanying cuDNN library are highly specialised hardware and software stacks for machine-learning tasks provided by Nvidia, which they have been developing over the past decade.

It’s the reason Nvidia has such a huge lead in the deep learning community, and the reason that their GPUs are able to command a premium over AMD. Basically all deep learning tools are now designed and benchmarked around Nvidia and CUDA, with some also supporting custom built hardware like Google’s TPUs. AMD is catching up, but the tooling for Nvidia “just works”. This is also the reason people buy those $2000 3090s and 4090s, not for gaming, but for actual work.

Frankly, the two chips are in completely different classes in terms of power draw and what they do (one is a dedicated GPU, the other is a whole SoC), it’s impressive that the M1/M2 even stays competitive.
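A small example of the "just works" tooling described above (generic PyTorch usage, assumed rather than taken from any specific Stable Diffusion release): cuDNN's kernel autotuner is exposed as a single flag.

```python
# cuDNN autotuning in PyTorch: one line on Nvidia hardware.
import torch

# Let cuDNN benchmark and pick the fastest convolution algorithm
# for each input shape it sees. Inert on machines without CUDA.
torch.backends.cudnn.benchmark = True

print(torch.backends.cudnn.benchmark)  # → True
```

There is no equivalently mature, drop-in autotuning story for most non-Nvidia backends, which is a big part of the ecosystem lead being described.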

15

DiscoveryOV t1_iytqahn wrote

Fastest in the world in their class.

I don’t see any ultrabooks with a 3060 in them, nor any even close to as powerful as a fanless 20 W chip.

13

vandalhearts t1_iyvioba wrote

The article compares an M1 Ultra with a 64-core GPU to an RTX 3060. That's a desktop system (Mac Studio) which starts at $5k USD.

3

Spirit_of_Hogwash t1_iytsbuk wrote

I don't see any ultrabook, or even a 5 kg laptop, with an M1 Ultra either.

Edit: you know what, actually you can buy many ultrabooks with the RTX 3060 (Asus ROG Zephyrus G14, Dell XPS, Razer Blade 14, and many more <20 mm thick laptops), while Apple laptops' GPU is at best half an M1 Ultra.

So yeah, talk about fanboys who can't even Google.

−9

AkirIkasu t1_iyu4g6q wrote

You never will, given that ultrabook is a trademark of Intel.

0

Spirit_of_Hogwash t1_iyu6yj7 wrote

The previous fanboy said ultrabook when everyone else was comparing desktop to desktop.

But it turns out the RTX 3060 is available in many ultrabooks, while the M1 Ultra is not available in any laptop format.

−1

kent2441 t1_iyu5dkz wrote

Apple has never said their GPUs were the fastest in the world. Why are you lying?

−1

Spirit_of_Hogwash t1_iyu5xnj wrote

https://birchtree.me/content/images/size/w960/2022/03/M1-Ultra-chart.jpeg

Dude, Apple is always claiming fastest in the world.

In this specific case Apple DID claim to be faster than the "highest-end discrete GPU", while in this and most real-world tests the M1 Ultra is roughly equivalent to a midrange Nvidia GPU.

You should ask yourself why, when Apple is the one lying, you believe them without checking reality.

9