Submitted by Dramatic-Economy3399 t3_106oj5l in singularity
AndromedaAnimated t1_j3j57yv wrote
Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399
What I see in you is that you are a good person. This is not in question. This is actually the very reason why I am trying to convince someone like you - someone talented with words and with a strong inner moral code, who could use their voice to reach the masses.
Where I see the danger is that the very ones whom you see as "evil" can - and already do - brainwash talents like you into taking up THEIR cause. That's why I am contradicting you so vehemently.
While I see reason in your answers, there is a long way to go to ensure that this reasoning also gets heard properly. For this, we need to appeal not to fear but to morals (=> your argument that developers and owners should be ethical thinkers is very good here). It would be easier to reach truth by approximation: deploy AGI to many people and let the moral reasoning evolve naturally. Concentration of power is too dangerous imo.
Hacking is already done mostly by the "soft" approach, which is why I mentioned it. Phishing is much easier and requires fewer resources than brute force today. Just lead people on, promise them some wireheading, and they go scanning the QR codes…
Hacking the software IS much easier than hacking the hardware. Hardware needs to be accessed physically; to hack software, you just need to compromise the weakest component - the HUMAN user.
A central all-powerful AGI/ASI will be as hackable as a weak personal AI, if not more so, because there will be far more motivation to hack it in the first place.
The reason we are not all nuked to death yet is that those who own nukes know that their OWN nuking would make life worse for THEMSELVES - not only because of the "chess draw" (mutual stalemate) we are told about again and again.
turnip_burrito t1_j3j62zz wrote
I'll need time to consider what you've said.