
ArnoF7 t1_j9rzhcc wrote

I must say I'm not very involved with the alignment community and don't have much exposure to their discussions, so I may be missing some ideas, but as a robotics researcher I'm not super worried about some of his concerns just from reading his post.

Currently there is no clear roadmap in the robotics community toward an agent that can autonomously and robustly interact with the unstructured physical world, even in a relatively specialized environment. Robotics is still very far from its ChatGPT moment, and I think current socioeconomic conditions are rather adversarial to robotics R&D compared to other domains. So such an agent would have very limited physical agency.

If you assume current auto-regressive LLMs can somehow lead to a super-intelligent agent that just figures out the robotics/physical interaction problem by itself, then sure, you could worry about it. But if we assume an omnipotent oracle, then we could worry about anything. It's not so different from worrying about a scenario in which the laws of physics just change in the next instant and all biological creatures explode under the new laws. I mean, it's possible, just not falsifiable, so I wouldn't worry too much about it.

Btw, I want to stress that I think most of EY's chains of thought that I've had the chance to read are logical. But his assumptions are usually very powerful, and when you have such powerful assumptions, a lot of things become possible.

Also, I wouldn't dismiss alignment research in general like many ML researchers do, precisely because I work with physical robots. There are many moments during my experiments when I think to myself, "this robot system could be a very efficient killing machine if people really tried" or "this system could make many people lose their jobs if it scaled economically". So yeah, in general I think some "alignment" research has its merits. Maybe we should start by addressing some problems that have already happened or are very imminent.


LetterRip t1_j9s7k0n wrote


ArnoF7 t1_j9sbjc8 wrote

Yes, I am aware of the paper you linked, although I can’t say I am super familiar with the details.

This is very cool and solves some of the problems in robotics, but not a whole lot. I'm not discrediting the authors (especially Fei Xia, whom I really admire as a robotics researcher, and of course Sergey Levine, who is probably my favorite), but the idea of fusing NLP and robotics to create a robot that can understand commands and serve you is not super new. Even 10+ years ago there was a famous video from the ROS developer Open Robotics (at the time it was still Willow Garage, IIRC) in which they tell the robot to grab a beer and the robot navigates the entire office and fetches it from the kitchen. Note that this is not the innovation these papers claim (they are actually investigating a possibility rather than solving a problem), but I assume it's what everyone takes to be the bottleneck of service robots, which in reality it isn't.


crt09 t1_j9tnr4q wrote

Yeah, GPT was the GPT moment of RL


Hyper1on t1_j9vudgi wrote

Just wanted to point out that even if we restrict ourselves purely to an agent that can only interact with the world through the internet, code, and natural language, that doesn't address the core AI alignment arguments about the dangers of instrumental convergence and the like.
