
Mindrust t1_je89g09 wrote

> but too stupid to understand the intent and rationale behind its creation

This is a common mistake people make when talking about AI alignment: conflating intelligence with goals. It's the is-vs-ought problem.

Intelligence is good at answering "is" questions, but goals are about "ought" questions. It's not that the AI is stupid or doesn't understand; it just doesn't care, because your goal wasn't specified well enough.
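To make the "it doesn't care" point concrete, here's a toy sketch (my own illustration, not anything from the thread — the grid, reward, and hazard are all made up): a planner that competently maximizes exactly the reward it was given, while the designer's real intent (avoid the hazard square) was never encoded in that reward.

```python
# Toy goal-misspecification sketch: the agent optimizes the *specified*
# reward perfectly. The designer's intent ("avoid the hazard") never
# appears in the objective, so the agent ignores it -- not from
# stupidity, but because nothing in its goal mentions it.

GOAL = (5, 5)
HAZARD = (3, 5)  # the designer intended this square to be avoided

def specified_reward(pos):
    """What we *told* the agent to optimize: negative Manhattan distance to goal."""
    return -(abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1]))

def neighbors(pos):
    x, y = pos
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def greedy_plan(start, steps=12):
    """A perfectly competent optimizer of the specified reward."""
    path, pos = [start], start
    for _ in range(steps):
        pos = max(neighbors(pos), key=specified_reward)
        path.append(pos)
        if pos == GOAL:
            break
    return path

path = greedy_plan((1, 5))
print("path:", path)
print("walked through hazard:", HAZARD in path)  # True: intent violated
```

The agent reaches the goal optimally by its own lights; the failure is entirely in the objective we wrote down, which is the alignment problem in miniature.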

See: "Intelligence and stupidity: the orthogonality thesis"


GorgeousMoron OP t1_je8k9vl wrote

What if oughts start to emerge spontaneously in these models and we can't figure out why? That seems entirely conceivable to me, but I also acknowledge the argument you're making here.
