Comments


ben_db t1_iyrmvzu wrote

I can forgive them not giving a comparison to other architectures, but why don't they give a reference to the timing before the optimisations? 18 seconds is meaningless.

236

hrkrx t1_iyrp1m8 wrote

My not-further-defined calculation machine (TM) can generate an image of unknown size in less than a random amount of time.

150

ben_db t1_iyrq17d wrote

Wow, that's an amount of time different to the previous calculation machine!

60

Themasterofcomedy209 t1_iyruepz wrote

My digital electronic programmable machine consisted simply of six hydrocoptic marzelvanes, so fitted to the ambifacient lunar waneshaft that sidefumbling was effectively prevented.

12

[deleted] t1_iyrq0sy wrote

[deleted]

39

ben_db t1_iyrqqfk wrote

The problem is, Stable Diffusion isn't a fixed-length operation. Yes, it's 50 iterations, but the cost of those iterations varies massively with the prompt, output resolution, and channels, as well as about 10 other settings.
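For example (a rough sketch using Hugging Face's diffusers API; the model ID and defaults here are my own assumptions, not the article's exact setup), every one of these knobs changes how long "one image" takes:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed model ID, for illustration only; any SD 1.x checkpoint behaves similarly.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # or "mps" on Apple Silicon

image = pipe(
    "an astronaut riding a horse",
    num_inference_steps=50,  # 20 vs 120 steps alone changes runtime ~6x
    height=512,              # doubling resolution roughly quadruples the UNet work
    width=512,
    guidance_scale=7.5,      # classifier-free guidance runs the UNet at batch size 2
).images[0]
image.save("out.png")
```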

33

AkirIkasu t1_iyu41du wrote

If you go to the actual github project you can see the full benchmarks and settings.

8

ben_db t1_iyukygi wrote

They should give comparisons in the article; that's the point.

Are Apple users just fine with this? It seems to happen a lot with Apple products.

Always "30% better" or "twice the performance", but never any actual meaningful numbers.

4

designingtheweb t1_iyvj0b5 wrote

TIL 9to5mac = Apple

4

ben_db t1_iyvm7c9 wrote

They do it just as much as Apple does; it seems common for Apple devices.

−3

Spirit_of_Hogwash t1_iysr7mx wrote

In the Ars Technica article they say that with an RTX 3060 it takes 8 seconds and with the M1 Ultra 9 seconds.

So once again Apple's "fastest in the world" claims are defeated by a mid-range GPU.

https://arstechnica.com/information-technology/2022/12/apple-slices-its-ai-image-synthesis-times-in-half-with-new-stable-diffusion-fix/

−6

dookiehat t1_iyt3gjg wrote

I think it is a software or compiler (?) issue. Stable Diffusion was written for Nvidia GPUs with CUDA cores. Idk what sort of translation happens, but it probably leads to inefficiencies not experienced with Nvidia.

19

sylfy t1_iytgr3x wrote

CUDA and the accompanying cuDNN libraries are highly specialised hardware and software libraries for machine learning tasks from Nvidia, which they have been developing over the past decade.

It’s the reason Nvidia has such a huge lead in the deep learning community, and the reason that their GPUs are able to command a premium over AMD. Basically all deep learning tools are now designed and benchmarked around Nvidia and CUDA, with some also supporting custom built hardware like Google’s TPUs. AMD is catching up, but the tooling for Nvidia “just works”. This is also the reason people buy those $2000 3090s and 4090s, not for gaming, but for actual work.

Frankly, the two chips are in completely different classes in terms of power draw and what they do (one is a dedicated GPU, the other is a whole SoC); it's impressive that the M1/M2 even stays competitive.
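Just to illustrate the gap (a hypothetical, minimal device-selection snippet of the kind most PyTorch projects ship, not anything from the article): the CUDA path is the mature default, while Apple's MPS backend is newer and still missing some ops:

```python
import torch

# Pick the best available backend. On Nvidia this hits the mature CUDA/cuDNN
# path; on Apple Silicon it falls back to the newer MPS backend (PyTorch 1.12+),
# which still lacks some operators and optimisations.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.randn(1, 3, 512, 512, device=device)
print(f"running on {device}")
```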

15

DiscoveryOV t1_iytqahn wrote

Fastest in the world in their class.

I don't see any ultrabooks with a 3060 in them, nor any even close to as powerful as a fanless 20W one.

13

vandalhearts t1_iyvioba wrote

The article compares a 64-core M1 Ultra to an RTX 3060. That's a desktop system (Mac Studio) which starts at $5k USD.

3

Spirit_of_Hogwash t1_iytsbuk wrote

I don't see any ultrabook or even a 5 kg laptop with an M1 Ultra either.

Edit: you know what, actually you can buy many ultrabooks with an RTX 3060 (Asus ROG Zephyrus G14, Dell XPS, Razer Blade 14, and many more <20 mm thick laptops), while the GPU in Apple's laptops is at best half an M1 Ultra.

So yeah, talk about fanboys who can't even Google.

−9

AkirIkasu t1_iyu4g6q wrote

You never will, given that ultrabook is a trademark of Intel.

0

Spirit_of_Hogwash t1_iyu6yj7 wrote

The previous fanboy said ultrabook when everyone else was comparing desktop to desktop.

But it turns out the RTX 3060 is available in many ultrabooks, while the M1 Ultra is not available in any laptop format.

−1

kent2441 t1_iyu5dkz wrote

Apple has never said their GPUs were the fastest in the world. Why are you lying?

−1

Spirit_of_Hogwash t1_iyu5xnj wrote

https://birchtree.me/content/images/size/w960/2022/03/M1-Ultra-chart.jpeg

Dude, Apple is always claiming "fastest in the world".

In this specific case, Apple DID claim that they are faster than the "highest-end discrete GPU", while in this and most real-world tests it is roughly equivalent to a midrange Nvidia GPU.

You should ask yourself why it's Apple who lies, yet you believe them without checking reality.

9

Avieshek OP t1_iyrsbe3 wrote

The M1 MacBook Air is a fanless, ultra-lightweight laptop with no dedicated GPU and 20-hour battery life. I'd say that's pretty impressive when we have yet to see a Mac Pro on Apple Silicon.

19

Cindexxx t1_iyrwekl wrote

I've just been wondering if the Mac Pro hasn't used Apple Silicon because the design doesn't scale up to it. Their chips are insanely impressive, but can that 20W thing scale up to 120W and actually deliver 5-6x the performance? And if it can, why haven't they done it?

15

Avieshek OP t1_iyrxwzr wrote

There have already been benchmark leaks with 96GB of RAM; there's a Covid situation going on in China currently, and the launch has likely been postponed to the end of the financial year.

12

AkirIkasu t1_iys6fy9 wrote

Perhaps? The M1 Ultra is basically two M1 chips glued together with a bunch of extra GPU cores.

There isn't an M2 Ultra right now, but it's probably only a matter of time until that gets released.

5

Eggsaladprincess t1_iysrcum wrote

I think M1 Max is basically 2 M1 chips and M1 Ultra is basically 4 M1 chips

4

StrangeCurry1 t1_iyvgj5g wrote

The M1 Max is an M1 Pro with extra GPU cores.

The M1 Ultra is two M1 Maxes.

The Mac Pro is expected to have a chip made of two M1 Ultras.

3

Cindexxx t1_iysxx32 wrote

Isn't that going to limit single-core performance to not much higher than the original M1? Maybe with more power and cooling they can crank it up a bit, but it seems like that's the limit.

2

Eggsaladprincess t1_iyt5x7v wrote

Not really sure what you're saying. Single-core performance is pretty consistent from the M1 to the M1 Ultra.

2

Cindexxx t1_iyt65ou wrote

Yeah, talking about the Pro line. If they're stuck at M1 single-core speeds at the desktop level, it'll suck for certain applications.

1

Nicebutdimbo t1_iyux4n3 wrote

Err, the single-core performance of the M1 chips is very high; I think when they were released they were the most powerful single cores available.

1

Eggsaladprincess t1_iytmifw wrote

Hm, I don't see it that way at all.

If we look at how Intel chips scale, we see that single-core performance actually decreases on the largest chips. That's why historically the Xeon Mac Pro would actually have lower single-core performance than the similar-generation i5 or i7.

Of course, the Xeon would more than make up for it by having tons of cores, more PCIe lanes, support for ECC RAM, etc.

I think it would be fantastic if the M1 Supermega, or whatever they end up calling the Mac Pro chip, matches the M1 single-core performance.

0

PBlove t1_iytidxo wrote

It's a tablet with a keyboard.

MacBook Airs are shit.

Half my office got those from IT.

I got a 4 lb Asus workstation with an A5000... ;p

(Basically I use it to run freaking CAD software, but only to review engineering. Hell, for fun I run Blender renders I set up at home and send over to render in the background while I work.)

−8

BlingyStratios t1_iyrsu45 wrote

What was it before? I tried it a couple of months ago on an M2 Air; an image would take me 15 minutes.

3

kallikalev t1_iyvwn8e wrote

A few months ago Stable Diffusion wasn't running on the GPU on Macs, so it was CPU-only.

2

whackwarrens t1_iys1b8e wrote

Chips become more power-efficient over time, so how old is that GPU? And on what node?

If you're comparing an old-ass node on a desktop part to Apple's latest and greatest mobile chip, the power difference would be insane. Comparable laptop APUs from AMD would manage the same, although they use like 65W last I checked.

M2 is on TSMC's 5nm-class node. Clearly a desktop PC taking 42 seconds to do basic 50-iteration renders isn't remotely bleeding edge lol.

1

HELPFUL_HULK t1_iyuyofg wrote

I'm using DiffusionBee on an M1 MacBook Air with 8GB of RAM and I'm getting similar time results to your friend: about 40-50 seconds for 50 steps at 512x512.

This is without the optimizations in the article above

1

Va-Va-Vooom t1_iywtdyk wrote

That's not right; my 1080 Ti takes 11 seconds to do that.

1

stealth_pandah t1_iyskm18 wrote

For example, my XPS 17 (11th-gen i7 and RTX 2060) generates one image in 10 seconds on average. I'd say 18 seconds is pretty good at this point. The Apple Silicon future looks brighter every day.

9

dangil t1_iysr4ln wrote

My 2010 12-core Mac Pro with a Radeon 7970 takes about 5 minutes.

2

ben_db t1_iysrvou wrote

You can't compare two different images with different settings

−1

dangil t1_iyt89xx wrote

Every prompt takes the same amount of time

−1

ben_db t1_iyt8la0 wrote

Prompt yes, anything else, no.

SD version, resolution, passes, channels etc, all massively affect performance.

"I take 25 minutes to drive to work and you take 30 so my car is faster"

8

PBlove t1_iyti2t1 wrote

That last part is a great way to put it.

0

BlazingShadowAU t1_iytrjlq wrote

Ngl, as someone who has run Stable Diffusion on my own GPU, 18 seconds could be god-awful, average, or good depending on the number of steps in the generation. A 15-step generation on my 2070 only takes like 4 seconds and produces perfectly fine results. Think I've gotta go up to 50+ steps before reaching 18 seconds.

1

AkirIkasu t1_iyu4v2k wrote

The benchmark they used is 50 steps on a 77-token input, outputting 512x512.

3

nybbleth t1_iyv34xm wrote

I have a 2070S and usually run 40 steps. I'd say that takes maybe about 10 seconds?

1

Starold t1_iytyzdk wrote

Not meaningless for those who use the same software.

0

Ykieks t1_iyuzg8v wrote

On a MacBook Pro with an M1 Max and 32GB of RAM, the time to generate an image with txt2img and no additional parameters was around 40-50 seconds, IIRC.

0

Defie22 t1_izbjpmr wrote

It was 7 seconds before. Happy now? 🙂

0

AkirIkasu t1_iys4fxb wrote

The actual write-up by Apple, for those curious.

The actual code, for those who want to try it out.

78

wakka55 t1_iyu4ocd wrote

I am too stupid to actually try it.

>ERROR: Failed building wheel for tokenizers or error: can't find Rust compiler

WHAT

lol

20

AkirIkasu t1_iyu60ra wrote

You need to have the nightly version of Rust installed. There's an issue linked in the FAQ of the README for the project that has instructions to install it.

14

wakka55 t1_iyu6n3o wrote

Maybe next year I'll give it another shot, for now I give up and go on with my dum dum life

6

Dtfran t1_iyvsxt2 wrote

You no dum dum, you just no coder, no problem 🫶🏼

1

Ill-Poet-3298 t1_iyvdvqy wrote

Same. I had it running prior to the beta 4 release, but now it's broken with the same error.

1

svtscottie t1_iytiq15 wrote

You the real MVP. The github page contains most of the info everyone is complaining the article didn't have.

9

juggarjew t1_iyrtq5p wrote

And I can generate an image in a few seconds on my Nvidia A4000. This is a meaningless statement given that you can tweak so many settings that there's no apples-to-apples comparison going on.

58

AkirIkasu t1_iyu4y2e wrote

From the github page:

> The image generation procedure follows the standard configuration: 50 inference steps, 512x512 output image resolution, 77 text token sequence length, classifier-free guidance (batch size of 2 for unet).
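If you want to approximate that configuration yourself, a rough diffusers-based sketch might look like this (my assumption of an equivalent setup; Apple's benchmark runs their own Core ML pipeline, not this code):

```python
import time

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # or "mps" on Apple Silicon

prompt = "a photo of an astronaut riding a horse on mars"
pipe(prompt, num_inference_steps=1)  # warm-up so timing excludes load/compile

start = time.perf_counter()
pipe(
    prompt,                  # the tokenizer pads/truncates this to 77 tokens
    num_inference_steps=50,  # 50 inference steps
    height=512, width=512,   # 512x512 output resolution
    guidance_scale=7.5,      # classifier-free guidance -> UNet batch size of 2
)
print(f"generated in {time.perf_counter() - start:.1f} s")
```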

12

S1DC t1_iyszcja wrote

Funny how they don't mention the number of steps/method used. Big difference between 120 steps of Euler vs 20 steps of DDIM

26

sambes06 t1_iys1dfx wrote

Would this work on M1 iPads?

4

AkirIkasu t1_iys400c wrote

From the article:

> This leads to some impressively speedy generators. Apple says a baseline M2 MacBook Air can generate an image using a 50-iteration StableDiffusion model in under 18 seconds. Even an M1 iPad Pro could do the same task in under 30 seconds.

17

browndog03 t1_iysn522 wrote

Maybe it's a time increase, who knows?

3

Ethario t1_iyu2im8 wrote

86,400 seconds a day divided by 18 seconds per waifu = 4,800 waifus a day. POG

3

Impossible_Wish_2675 t1_iyui8dc wrote

My Digital Abacus says a few seconds here and there, but no more than that.

2

Gubzs t1_iyvupvq wrote

Lmao Apple is so manipulative. They tout this like it's a good thing.

My 3-year-old $900 AMD laptop takes 8-10 seconds to do the same thing.

1

ryo4ever t1_iyuyo1m wrote

Why is it even called stable diffusion? This whole AI mumbo jumbo is confusing as hell…

0

Tarkcanis t1_iyu9dio wrote

If the tech industry could stop using "sciencey" words for their products, that'd be greaaat.

−1

Silias_Kato t1_iyv3l6j wrote

Another reason to hate Apple, then.

−1

headloser t1_iyumeqv wrote

And how does that compare to the Windows 10 and 11 versions?

−2

Draiko t1_iythp1x wrote

Knowing Apple, this method and result have a ton of asterisks on them.

−11

PBlove t1_iythyik wrote

YEP!

Bet it was on a special rig, not a consumer computer.

−10

rakehellion t1_iz0h7al wrote

What does Apple sell that isn't a consumer computer?

3