DukkyDrake t1_j8fvyr5 wrote
Reply to comment by FusionRocketsPlease in Altman vs. Yudkowsky outlook by kdun19ham
A lot of people do make that assumption, but a non-agent AGI doesn't necessarily mean you avoid all of the dangers. Even the CAIS (Comprehensive AI Services) model of AGI doesn't negate every alignment concern, though I think it's the safest approach and the risks there are mostly manageable.
Here are some more informed comments regarding alignment concerns and CAIS, which is what I think we'll end up with by default at the turn of the decade.