Submitted by valdanylchuk t3_y9ryrd in MachineLearning
WikiSummarizerBot t1_it75484 wrote
Reply to comment by valdanylchuk in [D] Do any major ML research groups focus on policy-making applications? by valdanylchuk
Existential risk from artificial general intelligence
>Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control.