Surur t1_jefzgf1 wrote

The logical way to prevent the creation of another AGI is to kill everyone. "Anything else is an unacceptable risk, given the bugginess of AI."

ribblle OP t1_jego2mx wrote

If you want to minimize the risk of AI, you minimize the actions of AI.

This isn't actually good enough, but it's the best strategy if you're forced to pick one.
