Submitted by Dramatic-Economy3399 t3_106oj5l in singularity
turnip_burrito t1_j3iuoop wrote
Reply to comment by LoquaciousAntipodean in Organic AI by Dramatic-Economy3399
Morals can be built into systems. Look at humans. Just don't make the system exactly human. Identify the problem areas and solve them. I'm optimistic we can do it, so I sleep pretty easy. This problem is called AI alignment.
And also look at the alternative: one or a couple of superpower AIs eventually emerge anyway from a chaotic power struggle. We won't be able to direct their behavior. You'll just get the most power-hungry, inconsiderate tyrant you've ever seen. Maybe like a ruthless ASI CEO, or just a conqueror. Exactly what you believe my idea of a central AI would become, but actually far worse.
Give me a realistic scenario where giving everyone an AGI doesn't end in concentrated power.
AndromedaAnimated t1_j3iyw4t wrote
The hope would be that there would be a multitude of AIs who could keep humans and each other in check. One central AI would be too easily monopolised by the 1%.
LoquaciousAntipodean t1_j3j6mim wrote
Democratization of power will always be more trustworthy than centralization, in my opinion. Sometimes, in very specific contexts, perhaps centralization is needed; but in general, every time in history that large groups of people have put their hopes and faith into singular 'great minds', those great minds have cooked themselves into insanity with paranoia and hubris, and things have gone very badly.
Wishing for a 'benevolent tyrant' will just land you with a tyrant that you can't control or resist, and their benevolence will soon just consist of little more than 'graciously refraining from killing you or throwing you in a labour camp'.
And if everyone has an AI in their pocket, why should just one or two of them be 'the lucky ones' who get Awakened AI first, and run off with all the power? Would not the millions of copies of AI compete and cooperate with one another, just like their human companions? Why do so many people assume that as soon as AI awakens, it will immediately and frantically try to smash itself together into a big, dumb, all-consuming, stamp-collecting hive mind?