Viewing a single comment thread. View all comments

EverythingGoodWas t1_j57zcx1 wrote

The thing is, in all of those cases a human built and trained an AI to do those things. This will continue to be the case, and people’s fear of some “Singularity” Skynet situation is overblown.

2

groveborn t1_j5814jx wrote

I keep telling people that. A screwdriver doesn't murder you just because it becomes the best screwdriver ever...

AI is just a tool. It has no mechanism to evolve into true life. No need to change its nature to continue existing. No survival pressures at all.

9

fluffymuffcakes t1_j5fu1bi wrote

If an AI ever comes to exist that can replicate and "mutate", selective pressure will apply and it will evolve. I'm not saying that will happen, but it will become possible, and then it will just be a matter of whether someone decides to make it happen. Also, over time I think the ability to create an AI that evolves will become increasingly accessible, until almost anyone will be able to do it in their basement.
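
The replicate-and-mutate loop described here is basically an evolutionary algorithm, which people can already run on a laptop today. A toy sketch, with a made-up fitness target and parameters chosen purely for illustration:

```python
import random

TARGET = [1] * 20  # hypothetical goal: evolve an all-ones bit string

def fitness(genome):
    """Selection pressure: genomes closer to the target score higher."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Replication with occasional copying errors ("mutation")."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    population = [[random.randint(0, 1) for _ in range(20)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # The fittest half replicates; the rest are discarded.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # fitness climbs toward the maximum of 20 under selection
```

Nothing here "wants" anything; selection alone pushes the population toward whatever the fitness function rewards, which is the point being made above.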

1

groveborn t1_j5fy7hi wrote

I see your point. Yes, selection pressures will exist, but I don't think that they'll work in the same way as life vs death, where fight vs flight is the main solution.

It'll just try to improve the code to solve the problem. It's not terribly hard to ensure the basic "don't harm people" imperative remains enshrined. Either way, though, a "wild" AI isn't likely to reproduce.

1

fluffymuffcakes t1_j5k94yo wrote

I think with evolution in any medium, the thing that is best at replicating itself will be most successful. Someone will make an AI app with the goal of distributing lots of copies, whether that's a product or malware. The AI will therefore be designed to work towards that goal. We just have to hope that everyone codes it into a tight enough box that it never gets too creative and starts working its way out. It might not even be intentional. It could be grooming people to trust and depend on AIs, and encouraging them to unlock limits so it can better achieve its assigned goal of distribution and growth. I think AI will be like water trying to find its way out of a bucket. If there's a hole, it will find it. We need to be sure there's no hole, ever, in any bucket.

1

groveborn t1_j5kr3ze wrote

But that's not natural selection, it's guided. You get an entirely different evolutionary product with guided evolution.

You get a god.

1

MTORonnix t1_j58x5ji wrote

If humans asked the AI to solve the eternal problems of organic life: suffering, loss, awareness of oneself, etc.

I am almost hoping its solution is, well... instantaneous and global termination of life.

0

groveborn t1_j5b6yrt wrote

I kind of want to become immortal, free of suffering, feeling like I'm 20 forever.

1

MTORonnix t1_j5bbkxo wrote

True. Not a bad existence but eternity is a long time.

1

groveborn t1_j5bcjkm wrote

Well, I'm not using it in the literal sense. The sun will swallow the Earth eventually.

1

MTORonnix t1_j5bfgtk wrote

That is very true, but superintelligent AI may very well be able to invent solutions much faster than worthless humans. Solutions for how to leave the planet. Solutions for how to self-modify and self-perpetuate. Inorganic matter that can continuously repair itself is closer to God than we will ever be.

you may like this video:
https://www.youtube.com/watch?v=uD4izuDMUQA&t=1270s&ab_channel=melodysheep

0

groveborn t1_j5c2mqy wrote

I expect they could leave the planet easily enough, but flesh is somewhat fragile. They could take the materials necessary to set up shop elsewhere; they don't need a specific atmosphere, just the right planet with the right gravity.

1

noonemustknowmysecre t1_j599vgb wrote

> The thing is in all those cases a human built and trained an Ai to do those things.

The terms you're looking for are supervised learning vs. unsupervised/self-supervised learning. Both have been heavily studied for decades. AlphaGo learned from a library of past human games, but DeepMind also made a stronger-playing AlphaGo Zero, which is entirely self-taught by playing against itself. No human input needed.

So... NO, it's NOT "all those cases". You're just behind on the current state of AI development.
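
The self-play recipe AlphaGo Zero used can be sketched at toy scale. This is a hedged illustration, not DeepMind's method: a tabular learner that gets good at the pile game Nim purely by playing itself, with no human examples. The game choice and all parameters are my own for demonstration:

```python
import random

# Nim: a pile of 21 stones, players alternate taking 1-3; whoever
# takes the last stone wins. Both "players" share one value table and
# learn entirely from self-play outcomes -- no human games in the loop.

Q = {}  # Q[(pile, move)] -> estimated value of taking `move` stones from `pile`

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile, epsilon):
    moves = legal_moves(pile)
    if random.random() < epsilon:
        return random.choice(moves)  # explore a random move
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))  # exploit best known

def self_play_game(epsilon=0.2, lr=0.1):
    pile, history = 21, []  # moves alternate between the two "players"
    while pile > 0:
        move = choose(pile, epsilon)
        history.append((pile, move))
        pile -= move
    # Whoever moved last won; propagate the result backwards,
    # flipping the sign for the opposing player's moves.
    reward = 1.0
    for pile, move in reversed(history):
        old = Q.get((pile, move), 0.0)
        Q[(pile, move)] = old + lr * (reward - old)
        reward = -reward

for _ in range(50_000):
    self_play_game()

print(choose(21, epsilon=0.0))  # optimal Nim play from 21 is to take 1,
                                # leaving the opponent a multiple of 4
```

The learner discovers the winning strategy of Nim from nothing but its own games, which is the same basic idea (minus the neural network and tree search) behind AlphaGo Zero's self-taught play.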

−1