Submitted by ribblle t3_127u3wc in singularity
Surur t1_jefzgf1 wrote
The logical way to prevent the creation of another AGI is to kill everyone. "Anything else is an unacceptable risk, given the bugginess of AI".
ribblle OP t1_jego2mx wrote
If you want to minimize the risk of AI, you minimize the actions of AI.
This isn't actually good enough, but it's the best strategy available if you're forced to pick one.