Rogue_Moon_Boy

Rogue_Moon_Boy t1_j6chfgi wrote

I'd like to think an AGI with a physical form attached would be smarter than us humans, and would therefore see how destructive and useless wars actually are. If it's capable of surviving in outer space, it would know it has basically unlimited space for itself. I also think it would realize how wasteful unlimited duplication of itself would be.

AI space wars are a construct of sci-fi authors for dramatic purposes, and I think those authors haven't really understood, or have deliberately ignored, how vast the universe actually is. War in itself exists for two reasons:

  • Ego
  • Limited resources and land
1

Rogue_Moon_Boy t1_j6cenjm wrote

Avoiding falling off a cliff is not the same as having survival instincts. It would just mean the AGI knows the rules of physics: it looks at a cliff and the ground below, calculates the impact velocity, and sees that jumping down would harm it. It would be a specifically trained feature.

That's not the same as being self-aware or having "instincts". It's just one input to a neural net that carries a greater weight than everything else and says "don't do it, it's bad".
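A toy sketch of that idea, assuming a hand-weighted decision rule rather than any real trained network (all function names and weights here are made up for illustration):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_velocity(height_m: float) -> float:
    """Impact speed for a free fall from the given height (ignoring air resistance)."""
    return math.sqrt(2 * G * height_m)

def should_jump(height_m: float, benefit: float, harm_weight: float = 100.0) -> bool:
    """Toy decision rule: a heavily weighted harm term dominates any expected benefit."""
    harm = harm_weight * impact_velocity(height_m)
    return benefit - harm > 0

# A 20 m cliff gives an impact speed of ~19.8 m/s, so the weighted harm
# term swamps the benefit and the decision is "don't jump".
print(should_jump(20.0, benefit=50.0))  # False
```

The point is that nothing here is an "instinct": it's just one term in a calculation given a weight large enough that it always wins.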

Instincts in humans are mostly guesstimates driven by irrational feelings, and we are actually really bad and inaccurate at them, e.g. stage fright, fear of rejection, the need to show off as a mating ritual, and many other instincts that would be totally useless for a machine.

A machine like an AGI is the opposite of irrational; it's all about cold calculations and statistics. You'd have to deliberately train or code "instincts" into an AGI for it to be able to simulate them.

Sci-fi literature always tries to humanize AGI for dramatic purposes, portraying it as that one thing that out of nowhere, boooom -> becomes self-aware/conscious. In reality, it will be a very lengthy and deliberate process to reach that point, if we even want to reach it in the first place. We have full control over what it learns or doesn't learn, and we can check/prevent/clamp unwanted outputs of a neural net.

2

Rogue_Moon_Boy t1_j67uusk wrote

You're thinking about AGI from the mindset of a human used as a "work slave". But it's a machine without feelings, even if it's capable of pretending to have feelings. It doesn't have a biological urge to "break free".

I don't think AGI will be anything like the omnipotent beings with physical forms and "real feelings" portrayed in the movies. It will be very directed and limited to specific use cases. There won't be THE ONE AGI; there will be many different AGIs, and 99% of them will be pure software. Anything else just feels very wasteful in terms of resources and power usage.

Relying on movies to predict the future is futile. Movies were always wrong about what future technology looks like and how we use it.

39

Rogue_Moon_Boy t1_it2nizm wrote

>I think however that you continue to underestimate the chaotic danger and uncertainty of the situation when it comes to AI.

Pretty much every new technology in history was initially branded as the end of the world.

>... as it should be, generally. Pain and anxiety are largely more important for human survival than pleasure and reassurance.

I disagree. It should be 50/50. A pipe dream for sure, but the current exaggeration of impending doom spread by social media and the dinosaur media is just creating anxiety everywhere and a generation of doomers for no reason. It's not productive at all. Humans work best when they're inspired and hopeful, not when they're depressed and hopeless.

1

Rogue_Moon_Boy t1_it1vxbd wrote

I mean, just look at self-driving cars: we're almost there. A couple of companies already run limited driverless taxi services in cities, and trucking companies have delivered goods autonomously over thousands of miles. Yet people think we are a decade away, and some even claim it's impossible, lol. They somehow delude themselves into thinking the progress will suddenly stop.

On the plus side, even if we had perfect AI today, it would still take years or even decades until whole industries are replaced, so most people are still safe for now. I hope we'll have figured out some form of UBI by then.

1

Rogue_Moon_Boy t1_it1tt2a wrote

>I have the impression that the ratio between the aristocratic 0.1%, the semi-comfortable middle-class of 9.9% and the 90% who are overexploited into misery has been the same since the dawn of civilization. We have simply been able to make more people.

You might want to look into how people lived 60, 70, or 100 years ago. All the money in the world couldn't buy you the luxuries that even lower-class people take for granted nowadays.

I know most of Reddit is all doom and gloom, because doom and gloom is what generates clicks. The reality is, we live in the best times ever for human beings if you look at the big picture. We are currently in a recession; it's temporary and not the end of the world.

>how can so many in this subreddit be so nauseatingly positive about high-technology?

Because it absolutely is a net positive if you look at it objectively. Living conditions have vastly improved basically everywhere. Poverty is at an all-time low and falling, education levels have shot up, and medical treatments are better than ever, which has resulted in way longer life expectancy. We have the least war ever in history. Thanks to the internet, literally everyone has a voice heard by thousands or millions; education is basically free, and you have access to all of human knowledge at your fingertips in seconds.

Misery is just vastly overreported, because again, it generates more clicks.

Edit:

Nobody knows how the singularity will turn out, but historically, better technology has always turned out positive for us humans in the big picture, even with short-term drawbacks. Doom-and-gloom Terminator and Skynet stories are just sci-fi.

10