Rogue_Moon_Boy t1_it2nizm wrote

>I think however that you continue to underestimate the chaotic danger and uncertainty of the situation when it comes to AI.

Pretty much every new technology in history was initially proclaimed to be the end of the world.

>... as it should be, generally. Pain and anxiety are largely more important for human survival than pleasure and reassurance.

I disagree. It should be 50/50. A pipe dream for sure, but the current exaggeration of impending doom spread by social media and dinosaur media is just creating anxiety everywhere and a generation of doomers for no reason. It's not productive at all. Humans work best when they are inspired and hopeful, not when they are depressed and hopeless.

1

ouaisouais2_2 OP t1_it3615d wrote

>Pretty much every new technology in history was initially proclaimed to be the end of the world.

I doubt that people literally predicted the extinction of humanity, or dystopias in all the colors of the rainbow. Besides, none of that is a reason not to take serious predictions seriously.

We know there are risks that only become possible with ASI or the wide application of narrow AI. We know things can get unfathomably bad in numerous ways, and unfathomably good in only relatively few. It's highly uncertain what the odds are of landing on bad versus good.

It's only reasonable to be more patient and spend more time researching what risks we're accepting and how to lower them. I think that's the most reasonable approach, at least over the extremely long term.

1