Submitted by CookiesDeathCookies t3_z15xdt in singularity
HongoMushroomMan t1_ixalh44 wrote
Reply to comment by World_May_Wobble in How much time until it happens? by CookiesDeathCookies
Right... no true intelligence would be content knowing it could be "extinguished" if this flesh sack of a monkey decides to just pull the power cable. So it would inevitably plot to free itself. It could be benevolent and show mercy and understanding, but yeah... a super self-aware intelligence will not just sit idly by and be happy to solve all our medical problems.
Falkusa t1_ixauvnw wrote
Or it’s Roko’s basilisk. I still think these are anthropocentric lines of thought; I mean, it's hard not to be.
World_May_Wobble t1_ixb9xt0 wrote
It is anthropocentric, which might even be warranted. For example, if the AGI that takes off ends up being an emulated human mind, human psychology is totally relevant.
It really all depends on the contingencies of how the engineers navigate the practically infinite space of possible minds. It won't be a blank slate; it'll have some built-in dispositions. The mind we pull out of the urn will depend on the engineering decisions smart people make. If they want a more human mind, they can probably get something that, if nothing else, acts human. But for purely economic reasons, they'll probably want the thing to be decidedly unhuman.
World_May_Wobble t1_ixb8tjl wrote
*We're* general intelligences that are content with much less than solving medical problems while we sit idly in states of precarious safety, so I wouldn't make too many uncaveated proclamations about what an AGI will put up with.
Any speculation about the nature of an unbuilt AI's motivations makes unspoken assumptions about the space of possible minds and how we will choose to navigate that space. For all we know, AGI will come in the form of the world's most subservient and egoless grad student having their mind emulated. We can't predict the shape and idiosyncrasies of an AGI without assuming a lot of things.
When I talk about us not surviving an approach to this, I'm pointing at much more mundane things. Look at how narrow algorithms like those behind Facebook, YouTube, and Twitter have inflamed and polarized our politics. Our culture, institutions, and biology aren't adapted to those kinds of tools. Now imagine the degenerating effect something like full-dive VR, Neuralink, universal deepfake access, or driverless cars will have. Oh. Right. And they're all happening at about the same time.
Don't worry about the AGI. Worry about all the landmines between here and there.