Submitted by purepersistence in r/singularity
AsheyDS wrote
Reply to comment by TFenrir in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>Why are you so confident that we will never do so? How are you so confident?
I mean, you're right, I probably shouldn't be. I'm close to an AGI developer who has potential solutions to these issues and believes in being thorough, and certainly not in giving it free will. So I have my biases, and I can't really account for others. The only other thing that makes me confident is that the researchers I've seen who (in my opinion) have the potential to make real progress also seem altruistic, at least to some degree. I guess an 'evil genius' could develop it in private and go through a whole clandestine supervillain arc, but I kind of doubt it. The risks have been beaten into everyone's heads. We might get some people experimenting with riskier aspects, hopefully in a safe setting, but I highly doubt anyone is going to give it open-ended objectives and agency and just let it loose on the world. Anyone smart enough to develop it should be smart enough to consider the risks. Demis Hassabis, in your example, says what he says because he understands those risks, and yet DeepMind is proceeding with its research.
Basically, what I'm trying to convey is that while there are risks, I don't think they're as bad as people say, including some other researchers. Everyone knows the risks, but some of these scenarios simply aren't realistic.