Mindrust

Mindrust t1_je89g09 wrote

> but too stupid to understand the intent and rationale behind its creation

This is a common mistake people make when talking about AI alignment, not understanding the difference between intelligence and goals. It's the is-vs-ought problem.

Intelligence is good at answering "is" questions, but goals are about "ought" questions. It's not that the AI is stupid or doesn't understand, it just doesn't care because your goal wasn't specified well enough.

Intelligence and stupidity: the orthogonality thesis

4

Mindrust t1_je87i5p wrote

>since he's almost certainly wrong you get utopia

It's good to be optimistic but what do you base this claim off of?

We don't know exactly what is going to happen with ASI, but both the orthogonality thesis and instrumental convergence thesis are very compelling. When you take those two into account, it's hard to imagine any kind of scenario that isn't catastrophic if value alignment isn't implemented from the start.

4

Mindrust t1_je86s4y wrote

>I'll take a 50% chance of paradise

That's pretty damn optimistic, considering Yudkowsky estimates a 90% chance of extinction if we continue on our current course.

>Issues like climate change are actually a threat to our species, and its an issue that will never be solved by humans alone

I don't see why narrow AI couldn't be trained to solve specific issues.

7