AndromedaAnimated t1_j3izufg wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

  1. „Humans not being able to augment themselves“ => are you aware that people with money already augment themselves? They live longer and healthier lives, they have better access to education…

  2. „bad humans“ => who decides which humans are bad and which are good?

  3. „morals not allowed to change“ => do you still want people to be stoned for having extramarital sex?

  4. „central AI less prone to be hacked“ => do you know how hacking works?

1

turnip_burrito t1_j3j1e7k wrote

  1. Yes, but I mean far more dramatic augmentation: adding five extra brains, increasing your computational speed by a factor of 10, adding more arms, more attention, etc. And indeed, you are right that people can augment themselves today, but it is extremely limited compared to how software can augment itself.

  2. Everyone has a different opinion, but most would say that people who steal from others out of greed, or people who kill, are bad people. These are the people who stand to gain a competitive advantage early on, through exponential growth of resources, if they use their personal AGI effectively.

  3. Unchanging morals have to be somewhat vague things like "balance this: maximize individual freedom and choice, minimize harm to people, err on the side of freedom vs security, and use feedback from people to improve specific implementations of this idea", not silly things like "stone people for adultery".

  4. It is less prone to being hacked. If you read my post, you will see that it loses the hardware vulnerabilities and retains only the software vulnerabilities. It may be possible for an AGI to make itself unhackable by any human, or perhaps even unhackable in principle. It may also be impossible to hack the AGI if its substrate doesn't run computer code but operates in some way other than the computers we know today.

1

AndromedaAnimated t1_j3j57yv wrote

I can see that you are a good person; that is not in question. It is actually the very reason I am trying to convince someone like you - someone talented with words and with a strong inner moral code, who could use their voice to reach the masses.

Where I see the danger is that the very people you call „evil“ can - and already do - brainwash talents like you into taking up THEIR cause. That's why I am contradicting you so vehemently.

While I see reason in your answers, there is a long way to go to ensure that this reasoning is also heard properly. For this, we need to appeal not to fear but to morals (=> your argument that developers and owners should be ethical thinkers is very good here). It would be easier to reach truth by approximation: deploy AGI to many people and let the moral reasoning evolve naturally. Concentration of power is too dangerous imo.

Hacking is already done mostly by the „soft“ approach; that's why I mentioned it. Phishing is much easier and requires far fewer resources than brute force today. Just lead people on, promise them some wireheading, and they go scanning the QR codes…
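A rough back-of-envelope calculation makes that asymmetry concrete. The numbers below (password length, character set, guess rate) are illustrative assumptions of mine, not figures from this thread:

```python
# Why brute force loses to phishing: a rough cost comparison.
# All numbers below are illustrative assumptions, not measurements.

charset = 94                  # printable ASCII characters
length = 12                   # a typical "strong" password length
guesses_per_second = 10**10   # a fast offline cracking rig

keyspace = charset ** length                # all possible passwords
avg_guesses = keyspace / 2                  # expected guesses: half the keyspace
seconds = avg_guesses / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)

print(f"keyspace: {keyspace:.2e} candidates")
print(f"expected brute-force time: {years:.2e} years")
# ~7.5e5 years of guessing - versus one convincing phishing message
# (or QR code), which costs the attacker minutes and targets the
# human instead of the math.
```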

Hacking the software IS much easier than hacking the hardware. Hardware needs to be accessed physically; to hack software you just need to access the weakest component - the HUMAN user.

A central, all-powerful AGI/ASI will be as hackable as a weak personal AI, if not more so, because there will be far more motivation to hack it in the first place.

The reason we are not all nuked to death yet is that those who own nukes know that their OWN use of nukes would make life worse for THEMSELVES, not only because of the „chess-game stalemate“ we are told about again and again.

1