Submitted by Dramatic-Economy3399 t3_106oj5l in singularity
LoquaciousAntipodean t1_j3iu39l wrote
Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399
A central AI? Built-in 'morals'? From what, the friggin Bible or something? Look how well that works on humans, you naive maniac. Haven't you ever read Asimov? Don't you know that Multivac and the three-laws-of-robotics thing were a joke, a satire of the Ten Commandments? Deliberately made spurious and logically weak, so that Asimov could poke holes in the concept to make the audience think harder?
Your faith in centralised power is horrifying and disturbing; you would build us the ultimate tyrant of a god, an all-controlling Skynet/Big Brother monster, that would lock our species into a stasis of 'perfectly efficient' misery and drudgery for the rest of eternity.
Your vision is a nightmare; how can you sleep at night with such fear in your heart?
turnip_burrito t1_j3iuoop wrote
Morals can be built into systems. Look at humans. Just don't make the system exactly human. Identify the problem areas and solve them. I'm optimistic we can do it, so I sleep pretty easy. This problem is called AI alignment.
And look at the alternative: one or a couple of superpower AIs eventually emerge anyway from a chaotic power struggle. We won't be able to direct their behavior. The winner will just be the most power-hungry, inconsiderate tyrant you've ever seen. Maybe a ruthless ASI CEO, or just a conqueror. Exactly the one you believe my idea of a central AI would be, but actually far worse.
Give me a realistic scenario where giving everyone an AGI doesn't end in concentrated power.
AndromedaAnimated t1_j3iyw4t wrote
The hope would be that a multitude of AIs could keep humans and each other in check. One central AI would be too easily monopolised by the 1%.
LoquaciousAntipodean t1_j3j6mim wrote
Democratization of power will always be more trustworthy than centralization, in my opinion. Sometimes, in very specific contexts, centralization may be needed, but in general, every time in history that large groups of people have put their hopes and faith in singular 'great minds', those great minds have cooked themselves into insanity with paranoia and hubris, and things have gone very badly.
Wishing for a 'benevolent tyrant' will just land you with a tyrant that you can't control or resist, and their benevolence will soon consist of little more than 'graciously refraining from killing you or throwing you in a labour camp'.
And if everyone has an AI in their pocket, why should just one or two of them be 'the lucky ones' who get Awakened AI first, and run off with all the power? Would not the millions of copies of AI compete and cooperate with one another, just like their human companions? Why do so many people assume that as soon as AI awakens, it will immediately and frantically try to smash itself together into a big, dumb, all-consuming, stamp-collecting hive mind?