Submitted by Shelfrock77 t3_xw9q7q in singularity
Shelfrock77 OP t1_ir56ap1 wrote
“New research from the University of Michigan proffers a way for robots to understand the mechanisms of tools, and other real-world articulated objects, by creating Neural Radiance Fields (NeRF) objects that demonstrate the way these objects move, potentially allowing the robot to interact with them and use them without tedious dedicated preconfiguration.”
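One plausible way to picture what the article describes (a minimal sketch, not the paper's actual method) is a radiance field conditioned on the object's joint state, so a single network can render an articulated object at any pose. The class name `ArticulatedNeRF` and the single scalar joint input below are illustrative assumptions, in PyTorch:

```python
# Hypothetical sketch: a radiance field conditioned on a joint angle,
# so one model covers every articulation state of the object.
import torch
import torch.nn as nn

class ArticulatedNeRF(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # Input: a 3D point plus one joint state (e.g. a hinge angle).
        self.net = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (r, g, b, density)
        )

    def forward(self, xyz, theta):
        out = self.net(torch.cat([xyz, theta], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colors in [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative density
        return rgb, sigma

# Example: query 8 points with the joint opened to 0.5 rad.
model = ArticulatedNeRF()
xyz = torch.rand(8, 3)
theta = torch.full((8, 1), 0.5)
rgb, sigma = model(xyz, theta)  # shapes (8, 3) and (8, 1)
```

Sweeping the joint input then "demonstrates the way these objects move," as the excerpt puts it.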
Akimbo333 t1_ir5f9ah wrote
This is interesting! Can you elaborate?
Shelfrock77 OP t1_ir5gjwy wrote
It seems like it’s a first step toward teaching a robot how tools work? That’s my assumption, but I could be wrong.
Akimbo333 t1_ir5gw0i wrote
Yeah, but I don't understand Neural Radiance Fields!
Shelfrock77 OP t1_ir5h0dy wrote
“A neural radiance field (NeRF) is a fully-connected neural network that can generate novel views of complex 3D scenes, based on a partial set of 2D images. It is trained to use a rendering loss to reproduce input views of a scene”
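To make the quoted definition concrete, here is a minimal sketch (assuming PyTorch; real NeRF also adds positional encoding, view directions, and hierarchical sampling) of the volume-rendering step behind that "rendering loss": colors and densities predicted at sample points along a camera ray are alpha-composited into one pixel, which is then compared to the matching pixel of an input image. The function name `render_ray` is an illustrative assumption.

```python
# Simplified sketch of NeRF's volume rendering along a single camera ray.
import torch

def render_ray(rgb, sigma, deltas):
    """Alpha-composite per-sample predictions into one pixel color.

    rgb:    (N, 3) colors from the fully-connected network at N samples
    sigma:  (N,)   densities at the same samples
    deltas: (N,)   distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)  # opacity of each segment
    # Transmittance: how much light survives to reach each sample.
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)  # final (3,) pixel color

# Example: 64 random samples along one ray.
pixel = render_ray(torch.rand(64, 3), torch.rand(64), torch.full((64,), 0.01))
# The rendering loss is then e.g. ((pixel - target_pixel) ** 2).mean(),
# driving the network to reproduce the input views of the scene.
```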
ThroawayBecauseIsuck t1_ir6ry5y wrote
Are you a chatbot? What's the weather forecast for NYC tomorrow?
Akimbo333 t1_ir8m9mo wrote
Thanks for the info!
love0_0all t1_ir7fs96 wrote
It seems to recognize the parts of the tool, and after manipulating it digitally a million times, it can understand how it is meant to work in 3D space under our standard physics.
Akimbo333 t1_ir8m7mx wrote
Oh wow!
Lone-Pine t1_ir76l3s wrote
This resembles Numenta's/Jeff Hawkins' theory about how the neocortex works, where the brain keeps thousands of 3D models of different objects in the environment.