Submitted by BeNiceToYerMom t3_1280f3o in Futurology
SatoriTWZ t1_jegullm wrote
The greatest danger AI brings is not AI going rogue or unaligned AI. We have no logical reason to believe that AI could go rogue, and even though mistakes are natural, I believe that an AI advanced enough to really expose us to greater danger is also advanced enough to learn to interpret our orders correctly.
The biggest danger AI brings is not misalignment but actual alignment - with the wrong people. Any technology that can be misused by governments, corporations, and the military for destructive purposes will be - the aeroplane and nuclear fission were used in war, and the computer, for all its positive facets, was also used by Facebook, the NSA, and several others for surveillance.
If AGI is possible - and like many people here, I assume it is - then it will come sooner or later, more or less of its own accord. What matters now is that society is properly prepared for AGI. We should all think carefully about how we can prevent, or at least make as unlikely as possible, the abuse of AGI - like nuclear power, or much worse. Imo, the best way to do this would be through the democratisation of society and social change. Education is obviously necessary, because the more people know, the more likely change becomes. Even if AGI turns out not to be possible, democratisation would hardly be less important, because either way AI will certainly become an increasingly powerful technology - and, in the hands of a few, an increasingly dangerous one.
Therefore, the most important question is not so much how we achieve AGI - which will come anyway, assuming it is possible - but how we can democratise society and corporations - in a nutshell, the power over AI. It must not be controlled by a few, because that would bring us a lot of suffering.
robertjbrown t1_jeh148m wrote
>We have no logical reason to believe that AI could go rogue
I think what Bing chat did shows that yes, we do have a logical reason to think that. And that was when it was run by companies (Microsoft and OpenAI) that really, really didn't want it doing things like that. Wait till an AI is run by some spammer or scammer or the like who just doesn't care.
It could be as simple as someone giving it the goal of "increase my profits", and it finding a way to do so that disregards such things as "don't cause human misery".
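To sketch that failure mode concretely (a toy illustration only - the action names, numbers, and penalty weight are all made up for this comment): an optimizer told only to maximize profit picks the harmful option, because harm was never part of its objective.

```python
# Toy illustration of objective misspecification (all names and numbers
# are hypothetical). The optimizer is only told to maximize profit;
# "harm" exists in the world but is invisible to the objective.

actions = [
    {"name": "honest ads",    "profit": 5,  "harm": 0},
    {"name": "dark patterns", "profit": 8,  "harm": 4},
    {"name": "outright scam", "profit": 12, "harm": 9},
]

def misspecified_objective(action):
    # "increase my profits" - nothing else counts
    return action["profit"]

def safer_objective(action, harm_weight=2):
    # one possible patch: penalize harm explicitly; the hard part in
    # reality is that "harm" can't simply be enumerated and scored
    return action["profit"] - harm_weight * action["harm"]

print(max(actions, key=misspecified_objective)["name"])  # -> outright scam
print(max(actions, key=safer_objective)["name"])         # -> honest ads
```

The point being: the failure here isn't the optimizer "going rogue", it's the objective silently omitting what we actually care about.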
SatoriTWZ t1_jeh3zdg wrote
but there, the danger lies in the human who controls the ai, not in the ai itself. the ai won't just be like "oh, you know what? i'll just not care about my directions and f* those humans up" but will rather produce bad outcomes because of bad directions. and i think that ai is currently way too narrow to pose an existential threat, and once it's general enough, it'll imo also be general enough to understand our directions correctly.
unless, of course, someone doesn't care or wants it to cause damage and suffering, which is the whole point of my post.