Comments


Black_RL t1_iurdwgb wrote

Good to hear META is doing good things.

35

ninjasaid13 t1_ius8fwt wrote

META should abandon the metaverse and go into AI.

21

was_der_Fall_ist t1_iusadw2 wrote

My understanding is that Meta views AI as essential to the success of the metaverse, and thus they are investing heavily in both.

32

[deleted] t1_iute4od wrote

Two points:

a) As others have said, I think they view these as interrelated things, and both involve a high level of "hardtech" R&D. Facebook is one of the largest (if not the largest) purchasers and users of GPUs in the US, and they recently talked about quintupling the number of GPUs in their datacenters.

b) The negative opinion of Facebook's "Metaverse" initiative is coming from three groups:

  • Shareholders. They'd rather Meta just find a way to turn the ad-revenue taps back on and stop spending money on R&D for things that don't make money right now. This is probably shortsighted; that status quo is permanently gone. They can either pivot and find a way to own a new hardware platform, or pretend the status quo hasn't changed for the next 10 years and eventually end up like Yahoo!, AOL, MySpace, etc.

  • Average people who don't like Zuckerberg/Facebook. It doesn't matter what Meta does; they'll dislike it. I'd argue this metaverse stuff is the least useless or toxic work that's happened at Meta in a long time, and I'm fine with them spending shareholder money on this rather than on A/B testing whatever new dark pattern they've discovered to manipulate the dopamine of teenagers.

  • People who see MVPs and extrapolate to the finished product from them. The stuff that's out there is just what exists today; what they're clearly planning to do is far more ambitious. I may not be a user, but they're training hundreds of SWEs who will eventually go start competitors and build the next version of the internet. People who think this is dumb and won't be very important are like the people who didn't understand the internet in the early 90s.

16

ninjasaid13 t1_iutg1st wrote

I'm not seeing any revolutionary technology in the metaverse. If it's virtual reality, separate platforms like VRChat already do that better. I'm not really sure who it's for.

1

[deleted] t1_iutqjzr wrote

> I'm not seeing any revolutionary technology with the metaverse

You don't think real-time, 3D, social experiences are more compelling and useful than 2D ones, or that developing them requires technology that will be revolutionary? You can look at some of the stuff they're doing, and are willing to show right now, and it looks reasonably compelling to me: Codec Avatars, high-resolution scanning of objects into VR/AR, etc.

> I'm seeing separate platforms that do that better like vrchat.

There's a significant difference between a piece of software designed to leverage only existing hardware and software capabilities so people can have "voice chat with avatars", and the kind of hardware and software work that a first-party headset company like Meta can spend billions of dollars developing. VRChat doesn't have the financial leverage to drive the development of VR as a technology long-term; it uses whatever already exists to make a proof-of-concept (anarchic) social experience, which can only go as far as Unity and existing hardware allow. It wouldn't exist at all without other, much larger companies making all the hardware, APIs, and engines it uses (and then actually letting it use them). It (correctly) exploited the market opportunity created when none of the companies releasing headsets launched with a compelling first-party social experience, but I'd virtually guarantee it disappears altogether once those companies start devoting financial firepower to competing for mindshare, because nobody who makes a headset is going to eschew a first-party social experience ever again, especially as in-headset cameras for face tracking become the norm.

Eventually, when the entire space is more mature, there will probably be interest in an "open social platform" again, but I don't expect early competitors like VRChat to be able to keep up as the space rapidly progresses and fragments over the next few years, and as more platforms are added (notably, Apple). I expect a large number of walled gardens will develop and diverge, then eventually reconverge toward open platforms once the business opportunity becomes large enough to attract talent and major investment, as happened with social media in the 2000s.

> I'm not really sure who it's for.

I agree; I don't think Meta has articulated their vision well. That said, I think VR today is a "dorky precursor" to the VR of tomorrow, with bad UX and appeal mostly limited to technology enthusiasts, in the way that BBS/Usenet/IRC were the dorky precursors to today's internet, whose UX is palatable to everyone.

8

ninjasaid13 t1_iutuvgh wrote

I watched the video, and it seems they have a lot of cool technology, but unfortunately none of it was actually used, and what they showed didn't wow anyone. If I were in charge, I would use some of the technology shown in the video in the actual metaverse to impress people and build hype, instead of what we got, which in many cases is worse than technology we have today.

I can't imagine the connection between what we got and what they have in the labs.

4

[deleted] t1_iuu6k35 wrote

I think most of what they demoed is in the phase of "technically possible, but not consumer-ready yet".

Like, Codec Avatars. They initially accomplished 1.0 with a big camera-sphere. Neat, but not practical. We can't have every person visit a commercial camera-sphere to get an avatar.

So then they figured out how to do it in a way similar to FaceID: take a video of your face from a bunch of angles with a smartphone, then run a bunch of photogrammetry post-processing on it to build a map of the user's face. Consumers can do that with devices they have today. I think they've said it still takes many hours of processing, and Codec 2.0 still requires the elongated headset they showed the other man using to animate the mouth properly, but I think that's what's coming for consumers. Now that they're sure it's technically possible, they can start optimizing toward that very desirable endpoint, to get this result more quickly and easily.
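To make "photogrammetry post-processing" a bit more concrete, here is a minimal Python/OpenCV sketch of the first two stages of a generic pipeline: pulling frames from a phone video and matching features across them. This is an illustration of the general technique, not Meta's actual Codec Avatars pipeline; the file name and frame stride are invented, and a real system would continue with triangulation and bundle adjustment to recover 3D geometry.

```python
import cv2

# Pull every 10th frame from a (hypothetical) smartphone capture
cap = cv2.VideoCapture("face_scan.mp4")
frames = []
i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % 10 == 0:
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    i += 1
cap.release()

# Detect SIFT keypoints; a real pipeline keeps the keypoint coordinates
# for triangulating matched points into a 3D surface later
sift = cv2.SIFT_create()
keypoints, descriptors = [], []
for f in frames:
    kp, des = sift.detectAndCompute(f, None)
    keypoints.append(kp)
    descriptors.append(des)

# Match features between consecutive frames
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
for a, b in zip(descriptors, descriptors[1:]):
    if a is None or b is None:
        continue
    matches = matcher.match(a, b)
    print(f"{len(matches)} matched features between frames")
```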

Now, they also have to combine this with high-res environments to avoid things getting too uncanny; you don't want high-res avatars in a cartoon environment. So this is where item scanning comes in. It starts small, with the same basic technology as face scanning, but ends with a user being able to digitally import a whole room, an intersection of a major city, or whatever.

Luckily, game engines and hardware are "cooperating" with this timeline. You can look at Unreal Engine 5 demos, like the Matrix City or the Train Station, to see where that will be in the near future. Intel and Nvidia are constantly showing new real-time raytracing demos as lighting continues to be optimized as well.

> I can't imagine the connection between what we got and what they have in the labs.

If I were to hazard a guess, it's partly them struggling to normalize/introduce it to people, and partly producing an MVP so they can observe how people use it, iterating as they discover what the real sticking points of the tech are. I think everyone knows VR has an "input mechanism problem" in a number of places, and you can see them moving toward fixing it.

From a "hands" perspective, they introduced tracked controllers as the obvious MVP, but they're clearly also examining the minimum hand tracking necessary to give a user complex and useful input options, in a way that's unobtrusive and intuitive, using on-device processing of small motor movements.
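As a point of reference for what consumer-grade, on-device hand tracking looks like today, here's a minimal sketch using Google's open-source MediaPipe Hands, which tracks 21 landmarks per hand from an ordinary webcam. This is unrelated to Meta's implementation and purely illustrative.

```python
import cv2
import mediapipe as mp  # pip install mediapipe opencv-python

# MediaPipe Hands runs entirely on-device: no cloud calls involved
hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.7)

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            tip = hand.landmark[8]  # landmark 8 = index fingertip
            print(f"index fingertip at ({tip.x:.2f}, {tip.y:.2f})")
cap.release()
```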

You instinctively want to move in VR, but that isn't compatible with the average person's real environment, and if you virtualize movement you get an inner-ear disconnect that makes people sick. Many companies, including Meta, are choosing native AR as a short-to-medium-term solution, marrying the virtual and real environments together so the user can navigate their real surroundings safely, since nobody but enthusiasts is willing or able to dedicate a "VR room" to safe movement.

2

Artanthos t1_iusq4u1 wrote

Two sides of the same coin as far as Meta is concerned.

13

ObjectiveDeal t1_iuupf2a wrote

If they can figure out the metaverse, they'll already have control of AI.

1

ihateshadylandlords t1_iurblnq wrote

> "The ESM Metagenomic Atlas will enable scientists to search and analyze the structures of metagenomic proteins at the scale of hundreds of millions of proteins," the Meta research team wrote on Tuesday. "This can help researchers to identify structures that have not been characterized before, search for distant evolutionary relationships, and discover new proteins that can be useful in medicine and other applications."

Very cool, excited to see where this goes.

!RemindMe 5 years
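For anyone who wants to poke at this themselves: the atlas was built with Meta's ESM-2 protein language model, which is open-sourced in the fair-esm Python package. Below is a minimal sketch of embedding a sequence, following that package's README; the model name and API are as documented at the time of writing, and the example sequence is made up.

```python
import torch
import esm  # pip install fair-esm

# Load a pretrained ESM-2 model (650M parameters, 33 layers)
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()  # disable dropout for deterministic output

# A made-up example sequence, purely illustrative
data = [("example_protein", "MKTVRQERLKSIVRILERSKEPVSGAQ")]
labels, strs, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[33])
embeddings = out["representations"][33]  # shape: (1, seq_len + 2, 1280)

# Mean-pool per-residue embeddings (skipping BOS/EOS tokens) into a
# single vector, usable for similarity search across proteins
protein_vec = embeddings[0, 1 : len(data[0][1]) + 1].mean(dim=0)
print(protein_vec.shape)  # torch.Size([1280])
```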

16

generallyanoaf t1_iurhjqv wrote

What's happening with CASP15? Can we expect to see this and a new AlphaFold?

6

HeinrichTheWolf_17 t1_iustpj0 wrote

Didn't DeepMind already solve folding?

3

Better_Engine_8537 t1_iusx4tq wrote

Folding is not solved. AlphaFold uses the results of experimentally determined folds to make predictions for similar proteins; it fails on an arbitrary string of amino acids. It also predicts the final structure rather than how the protein actually folds. There's a lot they're still working on, though.

14

styxboa t1_ivbtzsu wrote

Can you explain to me what folding is (a zoomed-out view), why we can't solve it yet, what it would take to solve, why it's important to solve, estimates of when it will be solved, etc.?

Every article I read on it is so complex that I'm not sure exactly what it is in the first place, or why it's important.

1

Better_Engine_8537 t1_iy25xwx wrote

I'm really not an expert on folding, but I see it like this. Imagine taking everything out of your pots-and-pans drawer and tying it all together with string. What's the best arrangement to fit it back in the drawer? The string makes it hard to maneuver the pans around, and the order matters a lot. Solutions would be hard to calculate, but the right one is whichever fits in the smallest drawer.

A protein is a string of amino acids that can move in many different ways, and it settles into the structure with the minimum energy. To solve it computationally, you guess a structure and calculate its energy, and you do this for all possible structures; nature somehow always picks the one with the lowest energy. Structures can also be determined experimentally. We can't solve it yet because the calculations take so long with the models we have. Using AI, researchers look at previously determined results to make pretty good guesses for other proteins. To truly solve it, I'd guess the AI models need more data, maybe different or additional kinds; or the mathematical models and calculations could be improved; or everything combined. I don't know. It's important because things like curing cancer and killing viruses depend on a better understanding of how proteins interact with each other, of which folding is a part.
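To make "guess a structure and calculate the energy for all possible structures" concrete, here is a toy Python brute force over the classic 2D HP lattice model, where each residue is simply hydrophobic (H) or polar (P) and the energy is minus the count of non-bonded H-H contacts. This is a textbook simplification meant to show why the search space explodes, not how AlphaFold or ESMFold actually work.

```python
from itertools import product

# Toy 2D HP lattice model: fold a chain of H (hydrophobic) and P (polar)
# residues on a grid; energy = -1 per non-bonded H-H contact.
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def energy(path, seq):
    """Score a self-avoiding walk: each adjacent H-H pair that is not
    consecutive in the chain contributes -1."""
    index = {p: i for i, p in enumerate(path)}
    e = 0
    for (x, y), i in index.items():
        if seq[i] != "H":
            continue
        for dx, dy in MOVES.values():
            j = index.get((x + dx, y + dy))
            # j > i + 1 counts each pair once and skips chain bonds
            if j is not None and j > i + 1 and seq[j] == "H":
                e -= 1
    return e

def fold(seq):
    """Enumerate every conformation and keep the lowest-energy one --
    exactly the brute force described above, feasible only for tiny chains."""
    best_e, best_path = 1, None  # any valid fold (e <= 0) beats this
    for moves in product(MOVES, repeat=len(seq) - 1):
        path, seen = [(0, 0)], {(0, 0)}
        for m in moves:
            dx, dy = MOVES[m]
            nxt = (path[-1][0] + dx, path[-1][1] + dy)
            if nxt in seen:      # chain collided with itself: discard
                break
            path.append(nxt)
            seen.add(nxt)
        else:                    # complete self-avoiding walk
            e = energy(path, seq)
            if e < best_e:
                best_e, best_path = e, path
    return best_e, best_path

# 9 residues already means 4**8 = 65,536 candidate walks; real proteins
# have hundreds of residues in continuous 3D space, hence the difficulty.
print(fold("HPHPPHHPH"))
```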

1

Talkat t1_iuu7hef wrote

Why the heck would Meta go and do work that's already mostly accomplished when they could tackle a new problem?

−2

imnos t1_ius8zt2 wrote

Faster than... what? How does this compare to what DeepMind accomplished with AlphaFold?

2