
valdanylchuk OP t1_it752xr wrote

No, not the only one. There is also the risk of weaponization, resource competition, and all sorts of misunderstandings... Sometimes it feels like the risks of AI are better researched than the benefits.

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence

However, there are risks with any technology, starting with fire and metalworking, and they are just something to guard against, not something to stop us from using the technology to our advantage.

In the case of policy making, obviously making AI our God Emperor is not the first step we would jump at. It is about finding some correlations and balancing some equations.


WikiSummarizerBot t1_it75484 wrote

Existential risk from artificial general intelligence

>Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control.

