ouaisouais2_2 OP t1_it3615d wrote
Reply to comment by Rogue_Moon_Boy in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
>Pretty much every new technology in history was initially proclaimed to be the end of the world.
I doubt that people literally predicted the extinction of humanity, or dystopias in every color of the rainbow, for most past technologies. Besides, even if they had, that wouldn't be a reason to dismiss serious predictions now.
We know there are risks that only become possible with ASI or the wide application of narrow AI. We know things can get unfathomably bad in numerous ways, and unfathomably good in only relatively few. How likely each outcome is remains highly uncertain.
It's only reasonable to be more patient and spend more time researching which risks we're accepting and how to lower them. I think that's the most reasonable course, at least over the extremely long term.