Chuyito t1_jcbu40y wrote

1. We are about to see a new push for a "robots.txt" equivalent for training data. E.g., if Yelp had a "datarules.txt" file indicating no training on its comments for private use. The idea being that you could specify a license which allows training on your data for open source, but not for profit. The benefit for Yelp would be similar to the original Netflix training data set we all used at some point.
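As a rough sketch of how such a "datarules.txt" could work, here is a minimal parser modeled on robots.txt semantics. The file format, directive names, and `license=` qualifier are all hypothetical assumptions for illustration; no such standard exists yet.

```python
# Hypothetical "datarules.txt" parser, analogous to robots.txt.
# Assumed example file a site might serve at /datarules.txt:
#
#   User-agent: *
#   Disallow-training: /reviews/
#   Allow-training: /reviews/ license=open-source
#
def parse_datarules(text):
    """Parse hypothetical datarules.txt text into (directive, value) pairs."""
    rules = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line or ":" not in line:
            continue
        directive, _, value = line.partition(":")
        rules.append((directive.strip().lower(), value.strip()))
    return rules

def training_allowed(rules, path, license_type):
    """Return True if training on `path` is permitted for `license_type`."""
    allowed = True  # default-permit, like robots.txt
    for directive, value in rules:
        parts = value.split()
        if not parts or not path.startswith(parts[0]):
            continue
        if directive == "disallow-training":
            allowed = False
        elif directive == "allow-training":
            # Optional qualifier such as "license=open-source"
            opts = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
            if opts.get("license", license_type) == license_type:
                allowed = True
    return allowed
```

With the example file above, an open-source crawler would be allowed to train on `/reviews/` while a for-profit one would not, which is exactly the split described here.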

2. It's going to create a massive push for open frameworks. I can see NVDA going down the path of "appliances," similar to what IBM and many tech companies did for servers with pre-installed software. Many of those were open-source software, configured and ready to use/tune for your app. If you want to adjust the weight on certain bias filters, but not write the model from scratch, having an in-house instance of your "assistant" will be favorable to many. (E.g., if you are doing research on biofuels, ChatGPT will censor way too much in trying to push "green", and lose track of research in favor of policy.)

27

Chuyito t1_ixz8ify wrote

It looks like an update to https://www.qualcomm.com/news/onq/2022/07/enabling-machines-to-efficiently-perceive-the-world-in-3d ; in July they were doing similar depth estimation.

> Depth estimation and 3D reconstruction is the perception task of creating 3D models of scenes and objects from 2D images. Our research leverages input configurations including a single image, stereo images, and 3D point clouds. We’ve developed SOTA supervised and self-supervised learning methods for monocular and stereo images with transformer models that are not only highly efficient but also very accurate. Beyond the model architecture, our full-stack optimization includes using neural architecture search...

That press article and the DONNA page keep it mostly at the high-level / architecture view, though.

3