Comments
hrkrx t1_iyrp1m8 wrote
My not-further-defined calculation machine(TM) can generate an image of unknown size in less than a random amount of time.
ben_db t1_iyrq17d wrote
Wow, that's an amount of time different to the previous calculation machine!
doremonhg t1_iys08wh wrote
Definitely one of the calculation machines ever made
AutoSlashS t1_iyryz50 wrote
Well, it's impressive.
Themasterofcomedy209 t1_iyruepz wrote
My digital electronic programmable machine consisted simply of six hydrocoptic marzelvanes, so fitted to the ambifacient lunar waneshaft that sidefumbling was effectively prevented.
ben_db t1_iyrqqfk wrote
The problem is, Stable Diffusion isn't a fixed-length operation. Yes, it's 50 iterations, but those iterations will vary massively based on the input prompt, output resolution, and channels, as well as about 10 other settings.
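To illustrate how much those settings move the needle, here's a minimal sketch using the Hugging Face diffusers API (the model name and prompt are placeholders, not anything from the article):

```python
import time

import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion pipeline and move it to whatever accelerator exists.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# Same model, same prompt -- wildly different runtimes per configuration.
for steps, (h, w) in [(20, (512, 512)), (50, (512, 512)), (50, (768, 768))]:
    start = time.perf_counter()
    pipe("an astronaut riding a horse",
         num_inference_steps=steps, height=h, width=w)
    print(f"{steps} steps @ {h}x{w}: {time.perf_counter() - start:.1f}s")
```

Going from 512x512 to 768x768 more than doubles the work per step, so a bare "seconds per image" number pins down almost nothing.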
AkirIkasu t1_iyu41du wrote
If you go to the actual github project you can see the full benchmarks and settings.
ben_db t1_iyukygi wrote
They should give comparisons in the article; that's the point.
Are Apple users just fine with this? It seems to happen a lot with Apple products.
Always "30% better" or "twice the performance" but never any actual meaningful numbers.
designingtheweb t1_iyvj0b5 wrote
TIL 9to5mac = Apple
ben_db t1_iyvm7c9 wrote
They do it just as much as Apple does; it seems common with coverage of Apple devices.
rakehellion t1_iz0fo89 wrote
No.
ben_db t1_iz0qla5 wrote
Well-thought-out argument, well done.
rakehellion t1_iz1es2v wrote
What can be asserted without evidence can be refuted without evidence.
Spirit_of_Hogwash t1_iysr7mx wrote
In the Ars Technica article they say that with an RTX 3060 it takes 8 seconds, and with the M1 Ultra, 9 seconds.
So once again Apple's "fastest in the world" claims are defeated by a mid-range GPU.
dookiehat t1_iyt3gjg wrote
I think it's a software or compiler (?) issue. Stable Diffusion was written for Nvidia GPUs with CUDA cores. I don't know what sort of translation happens, but it probably leads to inefficiencies not experienced with Nvidia.
sylfy t1_iytgr3x wrote
CUDA and the accompanying cuDNN libraries are highly specialised hardware and software libraries for machine learning tasks provided by Nvidia, which they have been working on for over a decade.
It’s the reason Nvidia has such a huge lead in the deep learning community, and the reason that their GPUs are able to command a premium over AMD. Basically all deep learning tools are now designed and benchmarked around Nvidia and CUDA, with some also supporting custom built hardware like Google’s TPUs. AMD is catching up, but the tooling for Nvidia “just works”. This is also the reason people buy those $2000 3090s and 4090s, not for gaming, but for actual work.
Frankly, the two chips are in completely different classes in terms of power draw and what they do (one is a dedicated GPU, the other is a whole SoC), it’s impressive that the M1/M2 even stays competitive.
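To make the "translation" point concrete: on Macs, PyTorch-based Stable Diffusion builds target the Metal (MPS) backend instead of CUDA. A rough sketch of the standard device-selection idiom (generic PyTorch, not anything from the article):

```python
import torch

# Pick the best available backend at runtime.
if torch.cuda.is_available():              # Nvidia path: CUDA + cuDNN kernels
    device = torch.device("cuda")
elif torch.backends.mps.is_available():    # Apple Silicon path: Metal Performance Shaders
    device = torch.device("mps")
else:
    device = torch.device("cpu")           # last resort

x = torch.randn(1, 4, 64, 64, device=device)  # e.g. a 64x64 latent tensor
```

Any operator the MPS backend doesn't implement falls back to the CPU, which is one plausible source of the inefficiencies mentioned above.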
DiscoveryOV t1_iytqahn wrote
Fastest in the world in their class.
I don't see any ultrabooks with a 3060 in them, nor any even close to as powerful as this fanless 20W one.
vandalhearts t1_iyvioba wrote
The article compares an M1 Ultra 64-core to an RTX 3060. That's a desktop system (Mac Studio) which starts @ $5k USD.
Spirit_of_Hogwash t1_iytsbuk wrote
I don't see any ultrabook, or even a 5kg laptop, with an M1 Ultra either.
Edit: you know what, actually you can buy many ultrabooks with the RTX 3060 (Asus ROG Zephyrus G14, Dell XPS, Razer Blade 14, and many more <20mm-thick laptops), while Apple laptops' GPU is at best half an M1 Ultra.
So yeah, talk about fanboys who can't even Google.
AkirIkasu t1_iyu4g6q wrote
You never will, given that Ultrabook is a trademark of Intel.
Spirit_of_Hogwash t1_iyu6yj7 wrote
The previous fanboy said ultrabook when everyone else was comparing desktop to desktop.
But it turns out the RTX 3060 is available in many ultrabooks, while the M1 Ultra is not available in any laptop format.
kent2441 t1_iyu5dkz wrote
Apple has never said their GPUs were the fastest in the world. Why are you lying?
Spirit_of_Hogwash t1_iyu5xnj wrote
https://birchtree.me/content/images/size/w960/2022/03/M1-Ultra-chart.jpeg
Dude, Apple is always claiming "fastest in the world".
In this specific case Apple DID claim to be faster than the "highest-end discrete GPU", while in this and most real-world tests it's roughly equivalent to a midrange Nvidia GPU.
You should ask yourself why Apple is the one lying and you believe them without checking reality.
Avieshek OP t1_iyrsbe3 wrote
The M1 MacBook Air is a fanless, ultra-lightweight laptop with no dedicated GPU and 20-hour battery life. I'd say that's pretty impressive when we have yet to see a Mac Pro on Apple Silicon.
Cindexxx t1_iyrwekl wrote
I've just been wondering if the Mac Pro hasn't moved to Apple Silicon because it doesn't scale up. Their chips are insanely impressive, but can that 20W design scale up to 120W and actually deliver 5-6x the performance? And if it can, why haven't they done it?
Avieshek OP t1_iyrxwzr wrote
There have already been benchmark leaks with 96GB of RAM; there's a Covid situation going on in China currently, and the launch has likely been postponed to the end of the financial year.
AkirIkasu t1_iys6fy9 wrote
Perhaps? The M1 Ultra is basically two M1 chips glued together with a bunch of extra GPU cores.
There isn't an M2 Ultra right now, but it's probably only a matter of time until that gets released.
Eggsaladprincess t1_iysrcum wrote
I think M1 Max is basically 2 M1 chips and M1 Ultra is basically 4 M1 chips
StrangeCurry1 t1_iyvgj5g wrote
The M1 Max is an M1 Pro with extra GPU cores.
The M1 Ultra is two M1 Maxes.
The Mac Pro is expected to have a chip made of two M1 Ultras.
Cindexxx t1_iysxx32 wrote
Isn't that going to limit single-core performance to not much higher than the original M1? Maybe with more power and cooling they can crank it up a bit, but it seems like that's the limit.
Eggsaladprincess t1_iyt5x7v wrote
Not really sure what you're saying. Single-core performance is pretty consistent from the M1 to the M1 Ultra.
Cindexxx t1_iyt65ou wrote
Yeah, talking about the Pro line. If they're stuck at M1 single-core speeds at the desktop level, it'll suck for certain applications.
Nicebutdimbo t1_iyux4n3 wrote
Err, the single-core performance of the M1 chips is very high. I think when they were released they were the most powerful single cores available.
Eggsaladprincess t1_iytmifw wrote
Hm, I don't see it that way at all.
If we look at how Intel chips scale, we see that single-core performance actually decreases on the largest chips. That's why, historically, the Xeon Mac Pro would actually have lower single-core performance than a similar-generation i5 or i7.
Of course, the Xeon would more than make up for it by having tons of cores, more PCIe lanes, support for ECC RAM, etc.
I think it would be fantastic if the M1 Supermega, or whatever they end up calling the Mac Pro chip, matches the M1's single-core performance.
Avieshek OP t1_iyso0h3 wrote
No, that’s M1 Max
PBlove t1_iytidxo wrote
It's a tablet with a keyboard.
MacBook Airs are shit.
Half my office got those from IT.
I got a 4lb Asus workstation with an A5000... ;p
(Basically I use it to run CAD software, but only to review engineering. Hell, for fun I run Blender renders that I set up at home and send over to render in the background while I work.)
BlingyStratios t1_iyrsu45 wrote
What was it before? I tried it a couple of months ago on an M2 Air; an image would take me 15 minutes.
kallikalev t1_iyvwn8e wrote
A few months ago Stable Diffusion wasn’t running on the GPU on macs, so that was CPU-only
whackwarrens t1_iys1b8e wrote
Chips become more power-efficient over time, so how old is that GPU? And on what node?
If you're comparing an old-ass node on a desktop part to Apple's latest and greatest mobile chip, the power difference would be insane. Comparable laptop APUs from AMD would manage the same, although they use like 65W last I checked.
The M2 is on like 4 nanometers. Clearly a desktop PC taking 42 seconds to do a basic 50-iteration render isn't remotely bleeding edge lol.
maxhaton t1_iythn1q wrote
It can absolutely draw more than 20W, no?
HELPFUL_HULK t1_iyuyofg wrote
I'm using DiffusionBee on an M1 MacBook Air with 8GB of RAM and I'm getting similar time results to your friend, about 40-50 seconds with 50 steps on a 512x512 model.
This is without the optimizations in the article above
Va-Va-Vooom t1_iywtdyk wrote
That's not right; my 1080 Ti takes 11 seconds to do that.
stealth_pandah t1_iyskm18 wrote
For example, my XPS 17 (11th-gen i7 and a 2060) generates one image in 10 seconds on average. I'd say 18 seconds is pretty good at this point. The M-silicon future looks brighter every day.
dangil t1_iysr4ln wrote
My 2010 12-core Mac Pro with a Radeon 7970 takes about 5 minutes.
BlazingShadowAU t1_iytrjlq wrote
Ngl, as someone who has run Stable Diffusion on my own GPU, 18 seconds could be god-awful, average, or good depending on the number of steps in the generation. A 15-step generation on my 2070 only takes like 4 seconds and produces perfectly fine results. I think I've got to go up to like 50+ steps before reaching 18 seconds.
rakehellion t1_iz0evl3 wrote
Also, they don't even say which model of Mac.
Starold t1_iytyzdk wrote
Not meaningless for those who use the same software.
Ykieks t1_iyuzg8v wrote
On a MacBook Pro with an M1 Max and 32GB of RAM, generating an image with txt2img and no additional parameters took around 40-50 seconds IIRC.
ben_db t1_iyvdajl wrote
With identical settings?
Defie22 t1_izbjpmr wrote
It was 7 seconds before. Happy now? 🙂
muffdivemcgruff t1_iyum5nx wrote
Cool, now put that into an iPad that barely sips wattage.
AkirIkasu t1_iys4fxb wrote
The actual write-up by Apple, for those curious.
And the actual code, for those who want to try it out.
wakka55 t1_iyu4ocd wrote
I am too stupid to actually try it.
>ERROR: Failed building wheel for tokenizers or error: can't find Rust compiler
WHAT
lol
AkirIkasu t1_iyu60ra wrote
You need the nightly version of Rust installed. There's an issue linked in the FAQ of the project's README with instructions for installing it.
wakka55 t1_iyu6n3o wrote
Maybe next year I'll give it another shot, for now I give up and go on with my dum dum life
Dtfran t1_iyvsxt2 wrote
You no dum dum, you just no coder, no problem 🫶🏼
ObjectiveDeal t1_iyw76yz wrote
Can I do this with the new iPad Pro M2?
Ill-Poet-3298 t1_iyvdvqy wrote
Same. I had it running prior to the beta 4 release, but now it's broken with the same error.
svtscottie t1_iytiq15 wrote
You the real MVP. The GitHub page contains most of the info everyone is complaining the article didn't have.
juggarjew t1_iyrtq5p wrote
And I can generate an image in a few seconds on my Nvidia A4000. This is a meaningless statement given that you can tweak so many settings that there's no apples-to-apples comparison going on.
AkirIkasu t1_iyu4y2e wrote
From the GitHub page:
> The image generation procedure follows the standard configuration: 50 inference steps, 512x512 output image resolution, 77 text token sequence length, classifier-free guidance (batch size of 2 for unet).
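For anyone who wants to sanity-check their own hardware against that configuration, here's a hedged sketch of the same settings expressed through the Hugging Face diffusers API rather than Apple's Core ML pipeline (model name and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps" if torch.backends.mps.is_available() else "cpu")

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=50,   # 50 inference steps
    height=512, width=512,    # 512x512 output resolution
    guidance_scale=7.5,       # classifier-free guidance: the unet sees a batch of 2
).images[0]                   # the prompt is tokenized and padded to 77 tokens internally
image.save("benchmark.png")
```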
muffdivemcgruff t1_iyum8m7 wrote
Welp his GPU is fast, maybe not his brain so much.
Aozora404 t1_iyt1kl0 wrote
Hehe apples
S1DC t1_iyszcja wrote
Funny how they don't mention the number of steps or the sampling method used. Big difference between 120 steps of Euler and 20 steps of DDIM.
CatWeekends t1_iyufwml wrote
S1DC t1_iyul16q wrote
That's a reasonable amount on Apple Silicon in 18 seconds. I get 50 steps of DDIM at 512x512 in about six seconds on an RTX 3080 10GB.
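In diffusers terms the sampler is just a swappable scheduler, which is why a step count without a sampler name says so little. A minimal sketch (model name and prompts assumed):

```python
from diffusers import (DDIMScheduler, EulerDiscreteScheduler,
                       StableDiffusionPipeline)

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# 20-step Euler run: few unet evaluations, often perfectly usable results.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
fast = pipe("a lighthouse at dusk", num_inference_steps=20).images[0]

# 120-step DDIM run: same model, same prompt, six times the unet evaluations.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
slow = pipe("a lighthouse at dusk", num_inference_steps=120).images[0]
```

Runtime scales almost linearly with step count, so the two runs above differ by roughly 6x on the same hardware.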
sambes06 t1_iys1dfx wrote
Would this work on M1 iPads?
AkirIkasu t1_iys400c wrote
From the article:
> This leads to some impressively speedy generators. Apple says a baseline M2 MacBook Air can generate an image using a 50-iteration StableDiffusion model in under 18 seconds. Even an M1 iPad Pro could do the same task in under 30 seconds.
browndog03 t1_iysn522 wrote
Maybe it's a time increase, who knows?
Ethario t1_iyu2im8 wrote
86400 seconds a day divided by 18 seconds per waifu. POG
Impossible_Wish_2675 t1_iyui8dc wrote
My Digital Abacus says a few seconds here and there, but no more than that.
Ok_Marionberry_9932 t1_iyvrf6g wrote
Wow, I'm not impressed. My 2070 Super does better.
rakehellion t1_iz0hisf wrote
This is a mobile GPU.
Gubzs t1_iyvupvq wrote
Lmao, Apple is so manipulative. They tout this like it's a good thing.
My 3-year-old $900 AMD laptop takes 8-10 seconds to do the same thing.
RiteMediaGroup t1_iyw6ucd wrote
That’s actually really slow
ryo4ever t1_iyuyo1m wrote
Why is it even called Stable Diffusion? This whole AI mumbo-jumbo is confusing as hell…
Tarkcanis t1_iyu9dio wrote
If the tech industry could stop using "sciencey" words for their products, that'd be greaate.
Silias_Kato t1_iyv3l6j wrote
Another reason to hate Apple, then.
headloser t1_iyumeqv wrote
And how does that compare to the Windows 10 and 11 versions?
Draiko t1_iythp1x wrote
Knowing Apple, this method and these results have a ton of asterisks on them.
PBlove t1_iythyik wrote
YEP!
Bet it was on a special rig, not a consumer computer.
rakehellion t1_iz0h7al wrote
What does Apple sell that isn't a consumer computer?
ben_db t1_iyrmvzu wrote
I can forgive them not giving a comparison to other architectures but why don't they give a reference to the timing before the optimisations? 18 seconds in meaningless.