
ouaisouais2_2 OP t1_it3615d wrote

>Pretty much every new technology ever in history was doomed as the end of the world initially.

I doubt that people literally predicted the extinction of humanity, or dystopias in every color of the rainbow. And even if they had, that's no reason to dismiss serious predictions today.

We know there are risks that only become possible with ASI or the wide application of narrow AI. We know it can go unfathomably badly in numerous ways, and unfathomably well in relatively few. How likely each outcome is remains highly uncertain.

It's only reasonable to be more patient and spend more time researching which risks we're accepting and how to lower them. I think that's the most sensible approach, at least over the extremely long term.
