ThatInternetGuy
ThatInternetGuy t1_isxfzu2 wrote
Reply to [D] Imagic Stable Diffusion training in 11 GB VRAM with diffusers and colab link. by 0x00groot
This Shivam Shrirao guy is super fast! Took him two days to make Dreambooth scripts and now just one day to make Imagic scripts.
ThatInternetGuy t1_isjg31t wrote
ThatInternetGuy t1_is8y9bt wrote
Reply to comment by Ok-Fig903 in Scientists teach brain cells in a dish to play Pong, opening potential path to powerful AI by WikkaOne
I can save you but you have to find the nearest public phone booth, which is about 4000 miles away.
ThatInternetGuy t1_is8i0uj wrote
Reply to comment by Ok-Fig903 in Scientists teach brain cells in a dish to play Pong, opening potential path to powerful AI by WikkaOne
Luckily you have me, ThatInternetGuy coming to rescue you.
ThatInternetGuy t1_is46ghv wrote
Reply to [D] Are GAN(s) still relevant as a research topic? or is there any idea regarding research on generative modeling? by aozorahime
Transformer-based models have been gaining traction since 2021 for generative modeling because you can practically scale them up to tens of billions of parameters, whereas GAN-based models have already saturated. Not that GANs are any less powerful; GANs are generally much more efficient in terms of performance and memory.
ThatInternetGuy t1_irmasvp wrote
Reply to comment by Sashinii in Stability AI is making an open source language module! by Akimbo333
Yep, humans have always fabricated fake news and photoshopped photos, and suddenly AI is dangerous because it can do the same.
ThatInternetGuy t1_ire1zdu wrote
Reply to comment by jzbot4000 in Apple’s New AirPods Are Telling Users to Replace the Batteries Already. Too Bad That’s Impossible by speckz
It should last 5 to 10 years, unless you let it sit with flat, empty batteries too often or for too long.
ThatInternetGuy t1_irdz1xx wrote
Reply to Google & TUAT’s WaveFit Neural Vocoder Achieves Inference Speeds 240x Faster Than WaveRNN by Dr_Singularity
Hope to see code released soon!
ThatInternetGuy t1_irdy36a wrote
Reply to comment by drizel in [Google AI] AudioLM: a Language Modeling Approach to Audio Generation by Danuer_
Yes, sit back and watch the AI practice for you. :D
ThatInternetGuy t1_ird8t9h wrote
Reply to comment by drizel in [Google AI] AudioLM: a Language Modeling Approach to Audio Generation by Danuer_
No, this AudioLM thing means you're not needed at all; after just 3 seconds of you playing the guitar, the AI model learns to mimic both your style and your guitar's acoustics.
ThatInternetGuy t1_ird8lx3 wrote
Holy cow! I had to play back the samples 5 times and still couldn't tell that the AI had continued the rest of the clips at all.
2022 is the best year of AI progress.
ThatInternetGuy t1_ir9zlmq wrote
Reply to comment by ReginaldIII in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
This is not the first time RL has been used to find efficient routings on silicon wafers and circuit boards. This announcement is good but not that good: a 25% reduction in silicon area.
I thought they had discovered a new Tensor Core design that gives at least a 100% improvement.
ThatInternetGuy t1_ir9v9aj wrote
Reply to comment by ReginaldIII in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
Yes, 25% improvement.
My point is, Nvidia CUTLASS has practically improved matrix multiplication by 200% to 900%. Why do you guys think matrix multiplication is currently slow on the GPU? I don't get that. The other guy said it's an unsolved problem. There is nothing unsolved when it comes to matrix multiplication; it has been vastly optimized over the years since RTX first came out.
It's apparent that RTX Tensor Cores and CUTLASS have really solved it. It's no coincidence that the recent explosion of ML progress came when Nvidia put in more Tensor Cores, and now with CUTLASS templates, all models benefit from a 200% to 900% performance boost.
This RL-designed GEMM is the icing on the cake, giving that extra 25%.
ThatInternetGuy t1_ir96weg wrote
Reply to comment by master3243 in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
Nvidia Tensor Cores implement GEMM for extremely fast matrix-matrix multiplication. This was figured out ages ago; however, it's up for debate whether the AI could improve the GEMM design to allow even faster matrix-matrix multiplication.
Matrix-matrix multiplication has never been slow. If it were, we wouldn't have today's extremely fast neural network computation.
If you follow the latest machine learning news, you will have heard about the recent release of Meta's AITemplate, which speeds up inference by 3x to 10x. That is possible thanks to the Nvidia CUTLASS team, who have made matrix-matrix multiplication even faster.
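To make the point concrete, here's a toy sketch (not CUTLASS itself; just NumPy standing in for any tuned BLAS/GEMM library) comparing a textbook triple-loop matmul against the optimized kernel NumPy dispatches to. The optimized path is the same tiled, vectorized approach that CUTLASS applies on GPUs:

```python
import numpy as np

def naive_matmul(a, b):
    """Textbook triple-loop GEMM: O(n^3) scalar multiply-adds."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                c[i, j] += a[i, p] * b[p, j]
    return c

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))

# The @ operator dispatches to an optimized BLAS GEMM (tiled, vectorized,
# multi-threaded) -- same answer, orders of magnitude faster at scale.
assert np.allclose(naive_matmul(a, b), a @ b)
```

Time both on a 1024x1024 matrix and the gap is dramatic, which is the whole point: the baseline everyone already uses is nothing like the naive loop.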
ThatInternetGuy t1_ir7zmqm wrote
Reply to comment by ReasonablyBadass in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
And a GPU is mainly matrix multiplication hardware. 3D graphics rendering is parallel matrix multiplication on the 3D model vertices and on the buffer pixels, so it's not really an unsolved problem; all graphics cards are designed to do extremely fast matrix multiplication.
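For anyone unfamiliar with the graphics side: a tiny illustrative sketch (NumPy here, but the GPU does this in parallel across millions of vertices) of how one matrix multiply transforms every vertex of a model at once:

```python
import numpy as np

# A 3D model's vertices in homogeneous coordinates (x, y, z, 1).
vertices = np.array([
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 1.0],
])

# A transform that translates every vertex by (2, 3, 4). In a real
# pipeline this would be the combined model-view-projection matrix.
translate = np.array([
    [1.0, 0.0, 0.0, 2.0],
    [0.0, 1.0, 0.0, 3.0],
    [0.0, 0.0, 1.0, 4.0],
    [0.0, 0.0, 0.0, 1.0],
])

# One matrix multiplication moves the whole vertex buffer; this is
# exactly the per-vertex work a GPU's hardware is built to do fast.
transformed = vertices @ translate.T
```

The first vertex (0, 0, 0) lands at (2, 3, 4) after the transform, and the same single matmul handles every other vertex in the buffer.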
ThatInternetGuy t1_ir3adnj wrote
Reply to [R] The Illustrated Stable Diffusion by jayalammar
Definitely the best illustrated article out there.
ThatInternetGuy t1_iqmjuvb wrote
Reply to comment by Apollo24_ in The Age of Magic Has Just Begun by Ohigetjokes
Why do you think Elon is pushing Neuralink?
Elon gives a really simple reason. He believes a person using a smartphone is already a cyborg, since the smartphone is an extension of the human form, but it doesn't feel that way because, as he explains, the communication bandwidth between the human and the machine is limited by how fast you can type on a keyboard. Having identified that bottleneck, Elon explains why Neuralink is the answer: fast, high-bandwidth communication between our human form and the machine, allowing us to stream our thoughts, visuals, ideas, and commands to the machine. For safety reasons, the machine may only respond to us via AR glasses and headsets, so as not to interfere directly with our brain signals.
So what he wants is for each person to have a fast AI computer linked, wired or wirelessly, to their Neuralink brain implant, so that basically everyone has the ability to do everything, from being fluent in all world languages to generating art, designing 3D models, fixing cars, and so on.
ThatInternetGuy t1_itxvv27 wrote
Reply to [D] Poisson Flow Generative Models - new physics inspired generative model by SleekEagle
I'm not surprised at all. Simulated annealing is an important optimization technique that is inspired by the metal annealing process.
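For context, a minimal sketch of simulated annealing (toy objective and cooling schedule are my own choices, not from the paper): like cooling metal, it starts "hot," accepting uphill moves to escape local minima, then cools so it settles into a good one.

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=1.0, seed=42):
    """Minimize f by analogy with metal annealing: accept worse moves
    with probability exp(-delta/T), while cooling T toward zero."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9   # linear cooling schedule
        cand = x + rng.gauss(0, 0.5)      # random neighboring state
        fc = f(cand)
        # Always accept improvements; sometimes accept uphill moves.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < fbest:                    # track the best state seen
            best, fbest = x, fx
    return best, fbest

# Toy objective with many local minima; the bowl term pulls toward 0
# while the sine term adds bumps a greedy descent would get stuck in.
f = lambda x: x * x + 2 * math.sin(5 * x) + 2
x, fx = simulated_annealing(f, x0=8.0)
```

Starting far away at x0=8.0, the hot phase random-walks across the bumps and the cold phase locks into one of the deep wells near zero, which greedy hill-climbing from the same start would likely miss.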