SatoriTWZ t1_jegullm wrote
The greatest danger AI poses is not that it goes rogue or becomes unaligned. We have no logical reason to believe AI could go rogue, and even though mistakes are natural, I believe an AI advanced enough to truly endanger us is also advanced enough to learn to interpret our orders correctly.
The biggest danger AI poses is not unalignment but actual alignment - with the wrong people. Any technology that can be misused by governments, corporations and the military for destructive purposes will be: the aeroplane and nuclear fission were used in war, and the computer, for all its positive facets, was also used by Facebook, the NSA and several others for surveillance.
If AGI is possible - and like many people here I assume it is - then it will come sooner or later, more or less of its own accord. What matters now is that society is properly prepared for AGI. We should all think carefully about how we can prevent, or at least make as unlikely as possible, AGI being abused the way nuclear power was - or much worse. Imo, the best way to do this would be through democratisation of society and social change. Education is obviously necessary, because the more people know, the more likely change becomes. Even if AGI should turn out not to be possible, democratisation would hardly be less important, because either way AI will certainly become an increasingly powerful technology - and therefore, in the hands of a few, an increasingly dangerous one.
Therefore, the most important question is not so much how we achieve AGI - which will come anyway, assuming it is possible - but how we can democratise society and corporations - in a nutshell, the power over AI. It must not be controlled by a few, because that would bring us a lot of suffering.
SatoriTWZ t1_jdsgfxh wrote
Reply to comment by rixtil41 in Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
so it's better to let the minority dictate to the majority?^^
SatoriTWZ t1_jdrlaxd wrote
Reply to comment by rixtil41 in Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
some things are more important than efficiency, e.g. equality and freedom.
SatoriTWZ t1_jdmxu2y wrote
Reply to Who do you think will be the winners and losers of the coming AI revolution? by tshirtguy2000
those who have power over the algorithms - governments and companies - will be the winners, and basically everyone else will be a loser to varying degrees.
unless societies change and become much more democratic. and i mean direct democracy, not electing people who then govern everybody else.
plus democratization of the economy and of workplaces.
i see no alternative.
SatoriTWZ t1_jeh3zdg wrote
Reply to comment by robertjbrown in How could AI actually cause the extinction of Homo sapiens? by BeNiceToYerMom
but there, the danger lies in the human who controls the ai, not in the ai itself. the ai won't just go "oh, you know what? i'll just ignore my directions and f* those humans up" - rather, it will produce bad outcomes because of bad directions. but i think ai is currently way too narrow to pose an existential threat, and once it's general enough, it'll imo also be general enough to understand our directions correctly.
unless, of course, someone doesn't care or wants it to cause damage and suffering, which is the whole point of my post.