Comments


LetMeGuessYourAlts t1_j68ai7i wrote

This is going to do amazing things for GIF reactions when it's fast and cheap.

92

kiteguycan t1_j68xk83 wrote

Would be cool if it could take a book as input and immediately turn it into a passable movie

40

Dontgooo t1_j69ciw8 wrote

Or a virtual reality you could step into. Why do you think Meta is going hard at VR?

10

strickolas t1_j6ahh8k wrote

That's actually a really great idea. There are tons of movies adapted from books, so you already have a labeled data set πŸ€”

5

AvgAIbot t1_j6abe0g wrote

That’s where the future is headed, no doubt in my mind. If not in the next few years, definitely within this decade

2

SpatialComputing OP t1_j67xr7u wrote

>Text-To-4D Dynamic Scene Generation
>
>Abstract
>
>We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions. Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency by querying a Text-to-Video (T2V) diffusion-based model. The dynamic video output generated from the provided text can be viewed from any camera location and angle, and can be composited into any 3D environment. MAV3D does not require any 3D or 4D data and the T2V model is trained only on Text-Image pairs and unlabeled videos. We demonstrate the effectiveness of our approach using comprehensive quantitative and qualitative experiments and show an improvement over previously established internal baselines. To the best of our knowledge, our method is the first to generate 3D dynamic scenes given a text description.

25
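The training recipe the abstract describes — fitting a dynamic scene using gradients distilled from a frozen Text-to-Video model, with no 3D or 4D ground truth — can be caricatured in a few lines. This is a toy sketch only, not MAV3D's implementation: `render`, `teacher_score`, and the scalar "scene" parameter are all hypothetical stand-ins for the real NeRF, differentiable renderer, and diffusion score.

```python
import math
import random

# Toy caricature of score-distillation-style optimization (NOT MAV3D's
# actual code): a frozen "teacher" supplies gradients that pull rendered
# frames toward its text-conditioned target, and those gradients update
# the scene parameter directly -- no 3D/4D ground truth is ever used.

random.seed(0)

def render(scene, t):
    """Stand-in renderer: maps a scalar scene parameter + time to a 'frame'."""
    return scene * math.cos(t)

def teacher_score(frame, t):
    """Stand-in for the frozen T2V model's score: points from the current
    frame toward the frame the 'text prompt' asks for (scene == 1.0)."""
    target = math.cos(t)
    return target - frame

scene = random.gauss(0.0, 1.0)   # toy 'NeRF' parameter, random init
lr = 0.1
for _ in range(500):
    t = random.uniform(0.0, 2.0 * math.pi)   # random time/camera sample
    frame = render(scene, t)
    grad = teacher_score(frame, t)           # distilled gradient, no labels
    scene += lr * grad * math.cos(t)         # chain rule through render()

print(abs(scene - 1.0) < 1e-2)  # scene has converged to the 'text' target
```

The point of the sketch is the supervision pattern: the only learning signal is the teacher's opinion of rendered frames, which is why the real method can get away with a T2V model trained solely on text-image pairs and unlabeled videos.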

youcandigit t1_j686m30 wrote

Where can I do this right now?

5

GhostCheese t1_j699f2m wrote

In the offices of Meta?

Doesn't look like they provide a portal to use it; they're just showing off what they can do.

8

pulpquoter t1_j68hppt wrote

Brilliant. How about the thing you put on your head to see images? This must be worth trillions.

4

marcingrzegzhik t1_j68ugfe wrote

Great post! I'm really excited to explore this project and see what kind of applications it has! Can you tell us a bit more about what kind of data it works with and how it works?

2

SaifKhayoon t1_j69e65n wrote

They had a problem sourcing labeled 3D video training data; you can tell this tech is still early from the shield in the bottom-right example.

Because labeled 3D training data is scarce, the current model relies on a workaround: it's trained only on text-image pairs and unlabeled videos. They could instead generate labeled 3D environments from 2D images using InstantNGP and GET3D with LAION's dataset of 5.85 billion CLIP-filtered image-text pairs, building a useful dataset for training.

1

hapliniste t1_j6gvcgp wrote

I guess AR glasses will make access to 3D video (as in first-person scanned scenes) way easier, at least for the companies that control the glasses' OS.

2

Dr_Kwanton t1_j6aikky wrote

I think the next challenge is producing the progression of a scene, not just a short GIF. It would take a new tool to create smooth, natural transitions between the 2D scenes that train the model.

1

whilneville t1_j6ar901 wrote

The consistency is impressively stable. It would be amazing to use a video as a reference, though I'm not interested in the 360° turntable.

1

Herrmaciek t1_j68kkbi wrote

Billions well spent

−1