VirtualHat t1_j9vkpgd wrote
Reply to comment by Jinoc in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
That's a good question. To be clear, I believe there is a risk of an extinction-level event, just that it's unlikely. My thinking goes like this.
- Extinction-level events must be rare, as one has not occurred in a very long time.
- Therefore the 'base' risk is very low, and I need evidence to convince me otherwise.
- I have yet to see strong evidence that AI will lead to an extinction-level event.
I think the most likely outcome is that AI will have serious negative consequences (along with some great ones), but that they will be recoverable.
I also think some people overestimate how 'super' a superintelligence can be and how unstoppable an advanced AI would be. In a game of perfect information like chess or Go, a sufficiently superior player can win essentially every game. But in a game with chance and imperfect information, a relatively weak player can occasionally beat a much stronger one. The world we live in is one of chance and imperfect information, which limits any agent's control over outcomes. This makes EY's 'AI didn't stop at human level for Go' analogy less relevant.
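To illustrate what I mean, here's a toy simulation (the skill and noise numbers are made up purely to show the effect): once per-game randomness is large relative to the skill gap, the weaker player wins a meaningful fraction of individual games.

```python
import random

# Toy model: each game's outcome is skill plus noise (standing in for
# chance and hidden information). Skill values and the noise scale are
# illustrative assumptions, not measurements of anything real.
def weak_player_upsets(strong_skill=2.0, weak_skill=0.0, noise=3.0):
    strong = strong_skill + random.gauss(0, noise)
    weak = weak_skill + random.gauss(0, noise)
    return weak > strong  # True when the weaker player pulls off an upset

games = 100_000
upsets = sum(weak_player_upsets() for _ in range(games))
print(f"Weak player wins {upsets / games:.1%} of single games")
# With noise=3.0 the upset rate comes out around 32%; with noise=0
# (pure skill, like Go with perfect play) it drops to 0%.
```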
Scyther99 t1_j9zomj7 wrote
The first point is like saying that phishing was nonexistent before we invented computers and the internet, so we wouldn't have had to worry about it once we invented them. There has been no AGI, and there have been no comparable events. Basing the estimate on the fact that an asteroid killing all life on Earth is unlikely does not make sense.
Smallpaul t1_ja6orxv wrote
> occasionally beat a much stronger player
We might occasionally win a battle against Skynet? I actually don't understand how this is comforting at all.
> The world we live in is one of chance and imperfect information, which limits any agent's control over the outcomes.
I might win a single game against a Poker World Champion, but if we play every day for a week, the chance of me coming out ahead overall is infinitesimal. I still don't see this as very comforting.
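To put rough numbers on that (the 20% single-game win probability is just an assumed figure for an amateur against a champion):

```python
from math import comb

def win_majority(p_single, n_games):
    """Probability of winning more than half of n_games,
    given probability p_single of winning any one game."""
    need = n_games // 2 + 1
    return sum(comb(n_games, k) * p_single**k * (1 - p_single)**(n_games - k)
               for k in range(need, n_games + 1))

# Even a generous 20% per-game win rate collapses over repeated play.
print(f"Best of 7:   {win_majority(0.2, 7):.4f}")    # ~0.033
print(f"Best of 101: {win_majority(0.2, 101):.2e}")  # vanishingly small
```

The per-game luck washes out as soon as the contest repeats, which is exactly the situation against a persistent adversary.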