Submitted by Ezekiel_W t3_y0hk8u in singularity
Smoke-away t1_irrx07t wrote
One step closer to real-time video generation.
Google Brain going crazy with the papers lately.
watermelontomato t1_irs0cvm wrote
My 3060 can generate an image with Stable Diffusion in around 10 seconds. If it really is 256x faster, that would be 25.6fps. I doubt the math is so clean and clear cut in reality though.
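The arithmetic in the comment above checks out exactly, under the commenter's assumed numbers (~10 s per image on a 3060, a claimed 256x speedup):

```python
# Back-of-envelope check: frames per second from per-image time and a
# speedup factor. Both inputs are the commenter's rough numbers, not
# measured benchmarks.
seconds_per_image = 10.0   # assumed Stable Diffusion time on an RTX 3060
speedup = 256              # claimed speedup from the paper

seconds_per_frame = seconds_per_image / speedup
fps = 1 / seconds_per_frame

print(f"{fps} fps")  # -> 25.6 fps
```

In practice the scaling is unlikely to be this linear (batching, memory bandwidth, and overhead all intervene), which is the commenter's caveat.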
SituatedSynapses t1_irsltdg wrote
They will discover some unique tricks to interpolate the future frame from the previous frame's render and get that over 30 FPS, I bet. The biggest problem I've noticed with AI generation is the huge amount of VRAM it needs. I really don't know how they're going to get around that, and I'm very curious to see what sort of wild tricks they figure out! :)
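The interpolation idea above, in its simplest possible form, is just a weighted blend of two rendered frames. This is a hypothetical sketch, not anything from the paper; real video models would use learned, motion-aware interpolation rather than a pixel-wise blend:

```python
import numpy as np

def interpolate(prev_frame: np.ndarray, next_frame: np.ndarray, t: float) -> np.ndarray:
    """Linear blend between two frames: t=0 returns prev_frame, t=1 returns next_frame."""
    return (1 - t) * prev_frame + t * next_frame

# Stand-ins for two rendered RGB frames (4x4 pixels, 3 channels).
prev_frame = np.zeros((4, 4, 3))
next_frame = np.ones((4, 4, 3))

# The halfway frame is generated "for free" without running the model again.
mid = interpolate(prev_frame, next_frame, 0.5)
```

The appeal is that each blended in-between frame costs almost nothing compared to a full diffusion sample, so generating every other frame and interpolating the rest could roughly double the effective frame rate.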
dasnihil t1_irtbsir wrote
I agree, it does need more VRAM to output faster, but I'm more excited about upcoming videos that maintain coherence like a proper human-made video. Then add audio synthesis to it and we can all implement our ideas and create amazing things. Even if the render takes time, it's still an amazing improvement to have.
-ZeroRelevance- t1_irvlkdo wrote
Seems like StabilityAI have some ideas for how to reduce it, since they seem pretty confident about getting Stable Diffusion below 1GB of VRAM. We’ll have to wait and see though.
kikechan t1_isaxkfh wrote
Wow, source?
-ZeroRelevance- t1_iscg70s wrote
Emad (the head of StabilityAI) has been saying on Twitter for a while now that he thinks they can get Stable Diffusion under a gigabyte of VRAM. Here's one of those tweets.
kikechan t1_isduh5j wrote
Thanks!