Comments

UniversalMomentum t1_ja7smoh wrote

If we make enough AIs, then at least one will appreciate humans. One question is how many AIs we will actually make. I think most of you see AI as mass proliferating. I don't. I think real AI will be few and far between, and not even as useful as plain old machine learning and robots capable of doing the physical part.

It's really the automation of labor we need, not a brilliant AI to tell us how dumb we are. Knowing things is great, but that doesn't get the actual labor done, and humans are mostly not suffering from a lack of innovation. If anything, our innovation might be killing us. It's endless cheap labor we need much more than self-aware AI.

So one question is how many profitable uses all these competing AIs will really have. As a consumer I'm MUCH more interested in Rosie the Robot level tech with no need for real AI. I don't mind the kind of fake AI that Google, Siri and ChatGPT use to interact with humans more fluidly, but if AI is alive we can't actually put it into lots of devices.

One scenario that might be common with AI is that you develop it, it shows some promise, and then it devolves into insanity.

There is too much assumption here that AI will be super beneficial soon just because we are making some progress. Often it's the last 10% of any project that takes 90% of the work and time, and I'd say we aren't 90% of the way to AI yet.

That all being said, AI is artificially evolved. This artificial evolution process will create ALL KINDS of different AI types and personalities, and we will mostly not know what we are creating beforehand, because we are using digital evolution and not custom-making every part of the AI.

2

Tnuvu t1_ja7uw0f wrote

A.I. is coded in our likeness and mentality, and trained on the internet, which is the pinnacle of "our intellect" with all its downfalls.

Given that humanity cannot appreciate humanity, how can we expect our "child" to do what we cannot?

This is why Mo Gawdat said we should make sure A.I. also sees the good in us, before it's too late.

2

peadith t1_ja88zi4 wrote

Of course AI could appreciate humans, probably even accurately, which will likely blow a hole in our persistent, rotten core. Fun times!

2

Lirdon t1_ja71rfg wrote

You're assuming that the AI would possess a kind of consciousness that is recognizable to you. That will almost certainly not be the case, unless it is specifically designed to mimic the human psyche, or by some improbable miracle it spontaneously develops one, maybe through the process of deep learning.

But excluding those possibilities, it is very likely that the AI's intelligence will be nothing we recognize. It might not interact with anything visual at all; it could be purely process-based, with no understanding of the physical world, where everything it interacts with is a software module.

I personally don't think that an AI gaining consciousness will be an automatic threat to our survival; it depends on its role, authority, connectivity and function. It may never develop a self-preservation imperative that drives it to identify threats to itself, or an imperative to optimize its surroundings, to which we might be a nuisance.

In any case, it likely won't be like us, able to love or care for us.

1

superjudgebunny t1_ja7cnlb wrote

My issue is, how would one replicate the endocrine system, the system our emotions come from? IMO that would be the hardest thing to do. "Feeling" is hard to conceptualize.

1

Lirdon t1_ja7epok wrote

Yeah, I don't think there can be an AI with emotions like ours, which undercuts the whole assumption that it might like us and care for us. There are whole pathways in the brain that get stimulated electrochemically by the endocrine system, and those just don't exist in an electronic system.

Again, I don't think AI consciousness will even be recognizable to us. We just don't know how it would look and behave.

It might never develop the more organic tendencies. Why would it ever decide that it needs to perpetuate itself, to keep itself alive?

1

superjudgebunny t1_ja7g32n wrote

I'm curious as well. I could see, sometime down the road, the Star Trek idea of a positronic brain, though built with the technology we have. I would think more of a quantum brain, but that's so far away it's laughable. So with what we can do now, I'm extremely curious what would emerge.

We assume it will have a motive. Why? Our drive is organic: the need to further the species. What does a mind without ANY emotion need or want?

I'm not sure we can even comprehend what the singularity will be like. I feel like we are very close, and I often wonder if we will even know when it happens. Personally, I find it a confusing idea.

1

UniversalMomentum t1_ja7t8m1 wrote

If we feed human emotions into a big dataset and keep crunching the algorithms, the result should be something that mimics human emotions so well you can't tell the difference.

We can argue about whether it really FEELS or not, but from our perspective it should be able to mimic all human behavior convincingly. Humans are not THAT complex; we tend to all act very similarly, so we won't be that hard to mimic.
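
To make that concrete, here's a deliberately crude sketch (the examples and labels below are invented for illustration): even a trivial nearest-neighbor lookup over labeled data produces human-looking "emotional" output with no inner feeling at all. Scale the dataset up by a few billion examples and swap the lookup for a trained model, and you get the kind of mimicry I mean.

```python
# Toy sketch: "emotion" as pure pattern-matching over labeled data.
# The dataset and labels are invented for illustration.
LABELED_EXAMPLES = [
    ("my dog died yesterday", "grief"),
    ("i just got promoted at work", "joy"),
    ("someone cut me off in traffic", "anger"),
    ("i have a big exam tomorrow", "anxiety"),
]

def mimic_emotion(situation: str) -> str:
    """Return the label of the example sharing the most words with the input."""
    words = set(situation.lower().split())
    _, label = max(LABELED_EXAMPLES,
                   key=lambda pair: len(words & set(pair[0].split())))
    return label

print(mimic_emotion("my cat died"))  # -> "grief", with zero actual feeling
```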

1

UniversalMomentum t1_ja7t1ov wrote

The same way you do everything with machine learning: you provide it with a ridiculously large dataset to build a suitable algorithm from. You don't have to understand every aspect of something, because you're using evolution, not hand-crafting every piece of code. It's machine learning's digital evolution instead of good old biological evolution.
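
For anyone who hasn't seen "digital evolution" in miniature, here's a minimal sketch (the target string, population size and mutation rate are arbitrary toy choices): nothing below is hand-crafted toward the answer except the fitness score, yet random variation plus selection finds it.

```python
import random

TARGET = "appreciate humans"          # arbitrary toy goal
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """Score a candidate by how many characters match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Randomly replace a few characters -- the 'variation' half of evolution."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from pure noise; nobody writes the answer in by hand.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

for generation in range(5000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Selection: keep the fittest half, refill with mutated copies of survivors.
    survivors = population[:100]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(100)]

print(f"generation {generation}: {population[0]!r}")
```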

1

superjudgebunny t1_ja7up3e wrote

:/ It's not that simple. We still use the olfactory sense, for example, which is hard to represent digitally and does not translate well logically. How do you program drive, also called will, the will to live? Or pain? These things are all very interconnected; biological signals aren't simple.

Where do you start? How would you give AI the idea of empathy? You still have to provide an input, and WHAT would that be? That is the hard question.

Musk has hinted at this with the brain implant. We would need an interface that can translate these things. But then you're just imprinting the human response, and you might as well build an organic computer…

The reason it's so hard is the same reason describing human emotions is difficult. What is love, without using love as a description? What's love vs. infatuation? Philosophically speaking, we cannot easily do this.

1

UniversalMomentum t1_ja7suyc wrote

AI will be evolved through machine learning cycles too, not just handmade; it will have components and features that were never designed at all. I don't think we will have much certainty about what we are creating at first, and even then we will still lack short- and long-term control over the outcome of this artificial evolution.

More than hand-crafting digital life, we are evolving digital life, which means a lot of it is still somewhat out of our direct control and understanding.

1

Lirdon t1_ja7t4hc wrote

Exactly my point: we will likely not be able to recognize the consciousness; it would be different from anything we understand.

0

KeaboUltra t1_ja829kd wrote

Yes. If it could think independently, there are multiple outcomes in which it would appreciate its creator, just as there are ways in which it would not, or would outright hate us. Its affection probably wouldn't be recognizable, as it's a machine with a completely different perception and cognitive ability, but that doesn't mean it couldn't find a way to communicate it to you. It's an AGI, modeled on human likeness; the thing would be smart, or would make itself smart. It could analyze human behavior, speech and emotion to learn how to convey that to you, the same way we try to do with animals. People sometimes pretend to act like an animal based on what we've learned about them, or learn what an animal likes so that they can express it.

>We keep sheep for what they provide for us and, moreover, we exterminate bugs that we find disgusting.

It's not that black and white; you can say the same thing about cows, pigs, chickens or any other animal eaten or used for its byproducts. Not everyone treats pet animals well. If someone saw a random sheep on a farm, they would probably pet it and treat it nicely. Bugs are the same: we exterminate them because they are pests that destroy your home or get into your food, yet people keep or admire all sorts of insects, like butterflies, caterpillars, beetles and ants. It's all really a dice roll whether AI is kind, mean, indifferent, or just like us.

1

MpVpRb t1_ja951as wrote

AI is software doing math, adding and multiplying really fast. That's it; it's just numbers.
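
That's literally true at the bottom. A sketch of a single artificial "neuron" (weights and inputs are made-up numbers for illustration) shows the whole trick is multiply, add, squash:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: multiply, add, then squash into (0, 1)."""
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w                    # multiply and add -- that's it
    return 1 / (1 + math.exp(-total))     # sigmoid squashing

# Made-up numbers; a real network just does this billions of times.
print(neuron([0.2, 0.7, 0.1], [0.9, -0.4, 0.3], bias=0.05))
```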

1

Shadowkiller00 t1_ja71ydj wrote

Just remember that sentience does not immediately necessitate an appreciation of art and beauty. Those come after other needs are taken care of. Think of babies and toddlers and how they perceive the world; think of the hierarchy of needs. An early AI will spend most of its days just trying to comprehend the world.

0

UniversalMomentum t1_ja7tq4t wrote

We don't know how sentience works really. We don't know what animals are thinking. We can barely tell what humans are thinking most of the time!

AI is a process of digital evolution, not hand-crafting all the code, so you kind of get what you get. You COULD get an AI that appreciates art but sucks at communication. You could get an AI that just wants to stare at the wall and lick doorknobs. You could get an AI that always invents a new way to get stuck in loops.

It's kind of like throwing a bunch of chemicals into a soup to make life: don't expect to know what you will get once we really reach the point of sentience. Right now I think we are nowhere near that point, and the progress of AI might slow down so much that it's not a big deal. We may make great progress in the first 90% and then find real sentience is vastly more complex than we thought; we really have no idea at this point. We certainly don't understand how our own brains produce sentience, or even how to define it well, so LOTS of unknowns there.

0

Shadowkiller00 t1_ja83ceh wrote

You have one clear data point on sentience. Your own. When you first became cognitively aware, did you care about art?

We assume life will be carbon-based because we are carbon-based and we don't have any other data points for other types of life. If you are going to speculate on sentience, you must use what you know, as that is the only good data point you have. Since the only creatures we know to be sentient are humans, you must start the conversation there. Any other conversation has no basis in reality and is just speculation without foundation.

1