
DukkyDrake t1_j8fvyr5 wrote

A lot of people do make that assumption, but a non-agent AGI doesn't necessarily mean you avoid all of the dangers. Even the CAIS model of AGI doesn't negate every alignment concern, though I think it's the safest approach, and the concerns it does raise are mostly in hand.

Here are some more informed comments on alignment concerns and CAIS, which is what I think we'll end up with by default by the turn of the decade.

3