
DarthBuzzard t1_j4v72yw wrote

> I don't work with hardware AR or VR, but I am sure the problems are not as complex as you make them out to be. I think limited processing power is what is holding it back. Display technology has matured enough thanks to smartphones that displays should not be the problem.

AR can't use any existing display in a consumer-viable form, and the optics stack has to be invented largely from scratch. Optics in particular are very difficult because light is finicky and hard to control. Then you have to achieve a wide field of view without distortion, somehow produce pure black alongside 100% transparency, work dynamically at many focal lengths, hit HDR brightness in the tens of thousands of nits (even the world's best HDR TV doesn't go beyond 2,000), deliver all-day or at least decent battery life in a pair of glasses without dissipating too much heat, and stabilize overlaid content with high precision, including high-precision environment mapping.

And we haven't even gotten into the main input method for AR, which is likely a brain-computer interface (EMG), or the software complexity and UX design, which are much harder because 3D is a far wider canvas for interactions than a 2D screen.


tomistruth t1_j4v83b3 wrote

Oh, if you mean AR including a brain monitor, then yes, that's a whole different beast. But aren't we still far away from that? Most people understand AR as a wearable headset screen like Google Glass or HoloLens.

But I get what you mean: the learning curve is much higher than in smartphone technology in certain respects. But smartphones themselves were inherently difficult to build too, not so much the hardware as the software. They required a whole new operating system built from scratch. If it weren't for Google and Apple having the manpower, we could still be using clamshell phones even today.
