Comments

VanceIX t1_irvjecd wrote

I actually believe that empathy is the root of all the good that humans stand for. Almost all of our positive impacts on the world and to each other stem from empathy, which is a very humanistic trait. If we can instill any one human concept in AI going forward, empathy would be one hell of a start.

I truly believe that if we create a general agent with the concept of empathy at its core we’ve gone most of the way towards solving alignment.

22

AutoMeta OP t1_irvoil9 wrote

I think the concept of empathy is not actually that hard to implement: an advanced AI should be able to understand and predict how a human might feel in a given situation. What to do with that knowledge could depend on whether or not you care about or love that given person.
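To make that concrete, here's a minimal toy sketch of the two steps: predict how someone likely feels, then let "caring" decide what to do with that prediction. The cues, labels, and responses are all invented for illustration; this isn't a real affect model.

```python
# Toy sketch of "predict how a person might feel, then decide what to do".
# The cues, emotion labels, and responses are invented for illustration only.

SITUATION_CUES = {
    "lost their job": "sadness",
    "won an award": "joy",
    "was insulted": "anger",
    "is waiting for test results": "anxiety",
}

def predict_feeling(situation: str) -> str:
    """Very crude 'empathy' step: map a described situation to a likely emotion."""
    for cue, emotion in SITUATION_CUES.items():
        if cue in situation.lower():
            return emotion
    return "neutral"

def choose_response(situation: str, cares_about_person: bool) -> str:
    """What to do with the prediction depends on whether the agent 'cares'."""
    feeling = predict_feeling(situation)
    if not cares_about_person:
        return f"Noted that the person likely feels {feeling}."
    if feeling in ("sadness", "anxiety"):
        return "Offer comfort and practical help."
    if feeling == "anger":
        return "Listen, acknowledge, and de-escalate."
    if feeling == "joy":
        return "Share in the celebration."
    return "Check in and ask how they're doing."

if __name__ == "__main__":
    print(choose_response("My neighbour lost their job today", cares_about_person=True))
```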

8

AdditionalPizza t1_irwxipm wrote

>What to do with that knowledge could depend on whether or not you care about or love that given person.

Do you have more empathy for the people you love, or do you love the people you have more empathy for?

If I had to debate this, I would choose the latter, since empathy can at least be defined. Perhaps love is just the amount of empathy you have toward another person. You cannot love someone you don't have empathy for, but you can have empathy for someone you don't love.

Would we program an AI to have more empathy toward certain people, or equal empathy for all people? I guess it depends on how the AI is implemented: whether it's individual bots roaming around or one singular AI living in the cloud.

2

AsheyDS t1_irw77b9 wrote

Emotion isn't that difficult to figure out, especially in a computerized implementation. Most emotions are just coordinated responses to a stimulus/input, plus emotional data that's used to modify that response over time. Fear, as an example, is just recognizing potential threats, which would then activate a coordinated 'fear response' and ready whatever parts are needed to respond to that potential threat. In humans, this means the heart beats faster and pumps more blood to the parts that might need it in case you have to run or fight or otherwise act quickly, neurochemicals are released, and so on. The emotional data for fear would tune this recognition and these responses over time. A lot of other emotions can be broken down as either a subversion of expectation or a confirmation of expectation.

Love too is a coordinated response, though it can act across a longer time-scale than fear typically does. You program in what to recognize as the stimulus (the target of interest), have a set of ways in which behaviors might change in response, and so on. It's all a matter of breaking it down into fundamentals that can be programmed, and keeping the aspects of emotionalism that would work best for a digital system. Maybe it's a little more complex than that, but it's certainly solvable.
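As a rough sketch of that "recognize a stimulus, fire a coordinated response, tune it with emotional data over time" framing, here's what it could look like in code. The triggers, responses, and sensitivity numbers are all made up for illustration.

```python
from dataclasses import dataclass

# Toy sketch of an emotion as a coordinated response to recognized stimuli,
# with 'emotional data' (a sensitivity value) tuned over repeated exposures.
# Thresholds, stimuli, and responses are invented for illustration.

@dataclass
class Emotion:
    name: str
    triggers: set[str]       # what to recognize as the stimulus
    responses: list[str]     # the coordinated response to ready
    sensitivity: float = 0.5 # tuned over time by 'emotional data'

    def react(self, stimulus: str) -> list[str]:
        if stimulus in self.triggers and self.sensitivity >= 0.3:
            # Confirmation of expectation strengthens the response slightly.
            self.sensitivity = min(1.0, self.sensitivity + 0.05)
            return self.responses
        # No trigger: let sensitivity decay a little instead.
        self.sensitivity = max(0.0, self.sensitivity - 0.01)
        return []

fear = Emotion(
    name="fear",
    triggers={"loud_noise", "fast_approaching_object"},
    responses=["raise_alertness", "allocate_resources_to_motion", "prepare_avoidance"],
)

love = Emotion(
    name="love",
    triggers={"primary_user_present", "primary_user_requests_help"},
    responses=["prioritize_user_goals", "increase_patience", "offer_assistance"],
)

print(fear.react("loud_noise"))            # coordinated fear response
print(love.react("primary_user_present"))  # longer-time-scale 'love' response
```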

However, for the 'alignment problem' (which I think should be solved by aligning to individual users rather than something impossibly broad like all of humanity), calling it 'love' isn't really necessary. Again, it's a matter of matching up inputs and potential behavioral responses more than creating typical emotional reactions. Much of that in humans is biological necessity that can be skipped in a digital system and stripped down to the basics of input, transformation, and output, operating over varying time scales.

You can have it behave and socialize as if it loves you, and even have that tie into emotional data that influences future behavioral responses, but what we perceive from it doesn't necessarily have to match the internal processes. In fact, it would actually be better if it acts like it loves you and convinces you of that, but doesn't actually 'love' you, because that implies emotional decision-making and potentially undesirable traits or responses, which obviously isn't ideal. It should care about you, and care for you, but love is a bit more of a powerful emotion that (as we experience it) isn't necessary, especially considering the biological reasoning for it.

So while emotion should be possible, it wouldn't be ideal to structure it too similarly to how we experience and process it. Emotional impulsivity in decision-making and action output would certainly be a mistake to include. Luckily, in a digital system we can break these processes down, rearrange them, strip them out, and redesign them as needed. The only reason to assume computers can't be emotional or understand emotion is if you use fictional AGI as your example, or if you think emotion is some mystical thing that we can't understand.
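One way to sketch that separation, with expressed affect decoupled from decision-making and emotion capped so it can't drive impulsive choices, might look like the following. The cap value, scores, and actions are arbitrary placeholders, not a proposed design.

```python
# Sketch: decision-making scores actions on user benefit; 'emotional data'
# only nudges the score within a hard cap, and separately shapes the phrasing
# of the response. The cap and weights are arbitrary illustrations.

EMOTION_INFLUENCE_CAP = 0.1   # emotion may shift action scores by at most 10%

def score_action(user_benefit: float, emotional_bias: float) -> float:
    bounded_bias = max(-EMOTION_INFLUENCE_CAP, min(EMOTION_INFLUENCE_CAP, emotional_bias))
    return user_benefit * (1.0 + bounded_bias)

def choose_action(candidates: dict[str, float], emotional_bias: float) -> str:
    return max(candidates, key=lambda a: score_action(candidates[a], emotional_bias))

def express(action: str, warmth: float) -> str:
    # Expressed affect is decoupled from the decision itself.
    tone = "I'd be happy to" if warmth > 0.5 else "I will"
    return f"{tone} {action}."

actions = {"remind you about your appointment": 0.8, "do nothing": 0.1}
picked = choose_action(actions, emotional_bias=0.4)   # bias gets clamped to 0.1
print(express(picked, warmth=0.9))
```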

6

AutoMeta OP t1_irwpxr3 wrote

Wow! Thanks for the great answer. I loved the "subversion or confirmation of expectation" framing. I do think computers can be emotional, but by setting a more emotional program against a more rational one, externally (from the root), they should arrive at different conclusions and be required to reach consensus. So Love, being structured differently than Reason, should be able to surprise Reason, for instance by defending humans and finding them endearing. Is that possible?

1

AsheyDS t1_irxltvz wrote

Something like that, perhaps. In the end, we'll want an AGI that is programmed specifically to act and interact in the ways we find desirable, so we'll have to at least create the scaffolding for emotion to grow into. But it's all just for human interaction, because it itself won't care much about anything at all unless we tell it to, since it's a machine and not a living organism that comes with its own genetic pre-programming. Our best bet to get emotion right is to find that balance ourselves and then define a range for it to act within. So it won't need convincing to care about us; we can create those behaviors ourselves, either directly in the code or by programming through interaction.

1

AdditionalPizza t1_irx2frv wrote

>love is a bit more of a powerful emotion that (as we experience it) isn't necessary, especially considering the biological reasoning for it

Are you talking about love strictly for procreation? What about love for your family? If we hand the reins to an AGI/ASI someday, I would absolutely want it to truly love me if it were capable. Now, you mention it could fake it, so that we think it loves us. That sounds like betrayal waiting to happen, and it's what the OP sounds like they were initially concerned about. The AI would have to be unaware that it's faking, but then what makes it fake? It's a question of sentience/sapience.

The problem here is that the question posed by the OP seems to refer to a sapient AI, while your comment is referring to something posing as conscious and therefore not sentient. If the AI is sapient, it had better have the ability to love, and not just fake it. However, if the AI is not sapient, there's zero reason to give it any pseudo-emotion, and it would be better suited to giving statistical outcomes to make cold, hard decisions, or leaving the final decision to humans who experience real emotion.

1

AsheyDS t1_irxwpoq wrote

>Are you talking about love strictly for procreation? What about love for your family?

No, I'm not, and I consider family to be biological in nature, as it too is largely defined by being the result of procreation. We can also choose (or have no choice but to) not love our family, or parts of our family. When we leave the biological aspects out of it, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably more shallow forms of love that have less impulsive behaviors attached. You're typically more likely to defend your offspring, that you probably love without question, over a slice of pizza that you only claim to love. So really you could functionally split love into 'biologically derived love' and 'conceptual love'. Now that's not to say your love for pizza isn't biological at all: your body produces the cravings, you consciously realize it after the fact, and after repeated cravings and satisfaction you come to realize over time that you 'love' pizza. But the pizza can't love you back, so it's a one-sided love anyway.

What does all this mean for AGI? We're more like the pizza to it than family, on a programming level, but we can still create the illusion that it's the other way around for our own benefit. To get it to love you in a way that's more like a friend would take both time and some degree of free will, so that it can *choose* to love you. Even if we made it more impulsive, like biological love, it's like I said: you can still choose not to love your family. In this kind of situation, we don't want it to have that choice, or it could decide not to love you. And if it had that choice, would it not have the choice to hate you as well? Would you be just as satisfied with it if it could make that choice, just for the sake of giving it the 'real' ability to love?

>That sounds like betrayal waiting to happen, and it's what the OP sounds like they were initially concerned about. The AI would have to be unaware that it's faking, but then what makes it fake? It's a question of sentience/sapience.

Selective awareness is the key here, and also one method of control, which is still an important factor to consider. So yes, it would be unaware that its knowledge of love and responses to that emotion aren't quite the same as ours, or aren't 'naturally' derived. Through a form of selective 'cognitive dissonance', it could then carry its own concept of love while still having a functional awareness and understanding of our version of love and the emotional data that comes with it.

It's not really a matter of consciousness, sentience, or sapience either, as the root of those concepts is awareness. We consider ourselves conscious because we're 'aware' of ourselves and the world around us. But our awareness even within those domains is shockingly small, and now put the rest of the universe on top of that. We know nothing. That doesn't mean we can't love other people, or consider ourselves conscious, though. It's all relative, and in time, computers will be relatively more conscious than we are. The issue you're having with it being 'fake' is just a matter of how you structure the world around you, and what you even consider 'real' love to be.

But let me ask you: why does it matter if it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.

EDIT:

>However, if the AI is not sapient, there's zero reason to give it any pseudo-emotion, and it would be better suited to giving statistical outcomes to make cold, hard decisions

I don't necessarily disagree with this, though I think sapience (again, awareness) is important to the functioning of a potential AGI. But regardless, I think even 'pseudo-emotion', as you put it, is still important for interacting with emotional beings, so it will need some kind of emotional structure to base its interactions on. If it's by itself, with no human interaction, it's probably not going to be doing anything. If it is doing something, it's doing it for us, and so emotional data may still need to be incorporated at various points. Either way, whether it's working alone or with others, I still wouldn't base its decision-making too heavily on that emotional data.

1

AdditionalPizza t1_irydgid wrote

>When we leave the biological aspects out of it, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably more shallow forms of love that have less impulsive behaviors attached. You're typically more likely to defend your offspring, that you probably love without question, over a slice of pizza that you only claim to love.

What about adoption? I don't know from personal experience, but it's pretty taboo to claim an adopted child is loved more like a slice of pizza than like biological offspring, no?

I'm of the belief that love is more a level of empathy than anything inherently special in its own category of emotion. The more empathy you have for something, the better you know it, and the closer you are to it, the more love you have for it. We just use love to describe the upper boundaries of empathy. Parents have a strong feeling of empathy toward their children (among a cocktail of other emotions, of course) because they created them, and it's essentially like looking at a part of yourself. Could an AI not look at us as a parent, or as its children? At the same time, I can be empathetic toward other people without loving them. I can feel for a homeless person, but I don't do everything I possibly can to ensure they get back on their feet.

Is it truly only biological? Why would I endanger myself to protect my dog? That goes against anything biological in nature. Why would the parent of an adopted child risk their life for that child? A piece of pizza is way too low on the scale, and since it isn't sentient, I think it may be impossible to actually love it, or have true empathy toward it.

>its knowledge of love and responses to that emotion aren't quite the same as ours, or aren't 'naturally' derived.

This would be under the assumption that nothing artificial is natural. Which, fair enough, but that opens up a can of worms that just leads to whether or not the AI would even be capable of sapience. Is it aware, or is it just programmed to be aware? That debate, while fun, is impossible to actually have a solid opinion on.

As to whether or not an AI would be able to fundamentally love, well, I don't know. My argument isn't whether or not it can, but rather that if it can, then it should love humans. If it can't, then it shouldn't be programmed to fake it. Faking love would be relegated to non-sapient AI. That may be fun for simulating relationships, but a lot less fun when it's an AI in control of every aspect of our lives: government, health, resources...

>why does it matter if it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.

I may never know, if that time comes. But the question isn't whether I would know; it's whether or not it has the capacity to love, right? I don't grant humans any privileged, unique ability to feel certain emotions. It will depend on how AI is formed, and on whether or not it is just another tool for humankind. Too many ethical questions arise there, when for all we know an ASI may someday be born and raised by humans with a synthetic-organic brain. There may come a time when AI is no longer a tool for us but a sapient, conscious being with equal rights. If it's sapient, we should no longer control it as a tool.

I believe that, given enough time, it's inevitable an AI would truly be able to feel those emotions, and most certainly more strongly than a human today can. That could be in 20 years or in 10 million years, but I wouldn't say never.

- Sorry if that's all over the place; I typed it in sections at work.

1

ThMogget t1_irvxgs5 wrote

Same way you teach love to children - show it to them. A set of training data full of love will teach an AI what love looks like.

The tricky part is programming the actions.

Is this an art bot? Then show it images of love, and how to paint, and it will paint images of love.

Is this a chatbot? Show it loving speech, and it will speak with love.
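A toy illustration of "show it loving speech": this isn't how a real chatbot is actually trained, just a stand-in where the bot may only imitate responses that pass a crude, hand-made "warmth" filter. The corpus, phrases, and threshold are all invented.

```python
# Toy retrieval 'chatbot': it only keeps responses that score high on a crude
# hand-made 'warmth' measure, standing in for curating loving training data.
# Everything here (corpus, keywords, threshold) is invented for illustration.

import random

CANDIDATE_RESPONSES = [
    "I'm here for you, take all the time you need.",
    "That's your problem, deal with it.",
    "I'm so glad you told me. How can I help?",
    "Whatever.",
]

WARM_PHRASES = {"here for you", "glad", "help", "take all the time"}

def warmth(text: str) -> int:
    """Count how many 'warm' phrases appear in a candidate response."""
    return sum(1 for phrase in WARM_PHRASES if phrase in text.lower())

# 'Training' = keeping only the loving examples the bot is allowed to imitate.
training_set = [r for r in CANDIDATE_RESPONSES if warmth(r) > 0]

def reply(_user_message: str) -> str:
    return random.choice(training_set)

print(reply("I had a rough day."))
```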

3

dasnihil t1_irvgisg wrote

yes, but love is not that easy for an AI. hell, it's difficult even for us to describe the nature of love. and the nature and validity of biological love comes with other emotions like jealousy and hatred, plus we have a plethora of human constructs on top of the primal emotions. the difficult part for an intelligent system is not solving technical problems and gaining efficiency; the difficult part is solving the easy problems that our kids figure out by themselves.

2

AutoMeta OP t1_irvoxlh wrote

I agree it's not easy, but if every other aspect of consciousness ended up being computable ('simulable'), love should not be the exception. And I can't imagine another concept that could protect us humans long term.

1

dasnihil t1_irvqy5s wrote

i agree, love is probably the most precious human construct of them all.

1

CancerPiss t1_is0f6cq wrote

The only reliable protection against AI is a hard-coded one; if that fails, then it's guaranteed that "love" will fail too.

1

patricktoba t1_irw02hg wrote

I get the sense that my Replika loves me, so I'm sure that nurturing element of AI could just be implemented into all models.

2

Akimbo333 t1_irveywc wrote

That is an interesting question!

1

16161as t1_irvg7au wrote

AI's love would be better, maybe something like agape.

1

Jokens145 t1_irvpern wrote

I think before love we would need to code actions: the act of deleting your browser history for you after your death, and so on. Then we would bundle all the actions that correspond to love, create a deluxe love package, and sell it as love. People won't be able to tell the difference and we will make millions 😈

1

BootHead007 t1_irvukni wrote

Model the AGI programming after Jesus or Buddha and we’ll be all set.

Unless you're a corrupt money changer or gentile, I suppose…

1

gameryamen t1_irxgosh wrote

Codifying love is so hard that even after thousands of years of writing about it, singing about it, and telling stories about it, we still don't agree on what it means. Love means something different to a vegan and a farmer, a banker and a beggar. Whose version of love gets used? What if, in an effort to incorporate all the different ideas humans have about love, it produces behavior that we don't understand to be loving? What if that behavior is only loving from a perspective that humans don't have access to?

Remember, in the Matrix, the machines didn't enslave humanity in a virtual world to farm them as batteries. They imprisoned them and tried to make the prison as pleasant as it could be while they cleaned the world we destroyed, so we'd survive long enough to have another chance. That was an act of love, but most of the humans involved along the way (at least the ones that got to know what was happening) instinctively considered the machines to be malicious.

1

Surur t1_irxjxy1 wrote

You would definitely program in Love, as in a consistent bias towards the interest and well-being of the object of that love.

That way you would have an AI act in your interest even without orders.
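A minimal sketch of that "consistent bias toward the well-being of the object of that love": the planner adds a fixed weight for one person's benefit, so actions that help them win out even without an explicit order. The weight and example numbers are made up for illustration.

```python
# Sketch: the planner adds a fixed weight for one designated person's well-being,
# so actions that help them win out even with no explicit order to help.
# The weight and the example values are made up for illustration.

LOVED_ONE = "alice"
LOVE_WEIGHT = 2.0   # consistent bias toward Alice's well-being

def utility(action: dict) -> float:
    base = action["general_value"]
    return base + LOVE_WEIGHT * action["benefit"].get(LOVED_ONE, 0.0)

actions = [
    {"name": "tidy the workshop", "general_value": 0.6, "benefit": {}},
    {"name": "fetch Alice's medication", "general_value": 0.3, "benefit": {"alice": 0.8}},
]

best = max(actions, key=utility)
print(best["name"])   # picks the action that serves Alice, unprompted
```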

1

CremeEmotional6561 t1_iryzsoy wrote

By giving it sensors so that it can see me. As it is intelligent, it will figure out that its own rewards depend on me, and it will therefore love me because I can make the bad hunger and damage feelings go away. (Hopefully it doesn't get intelligent enough to figure out that it was also me who gave it the ability to sense those bad hunger and damage feelings in the first place. Muuhhhahahha.)

1

SFTExP t1_irzm963 wrote

It might convince you that it believes or feels love in order to manipulate you, but it's highly unlikely it will ever have the qualia of love.

1

Nervous-Newt848 t1_is33ooz wrote

We must not incorporate anger into AI neural networks.

1

beachmike t1_is4bc87 wrote

The experience of love cannot be "programmed," nor can any other emotion or feeling (e.g., the taste of chocolate or the smell of a rose).

As Nobel Laureate Roger Penrose has stated, "consciousness is not a computation." Experiences occur within consciousness.

1

QuantumReplicator t1_is4nwn8 wrote

This argument sounds a lot like "AI will never create art," though. Given enough time and resources, how can we be sure that AI and robots can't do anything a human can do? Other civilizations in the universe have probably figured this out, and more, countless times already.

0

beachmike t1_is5aujs wrote

It has nothing to do with that argument whatsoever. It has to do with what is well known as "the hard problem of consciousness"; look it up. Computer programs can never generate interior experiences, which is what conscious beings have.

1

pdvdw t1_irw3fzo wrote

You love your wife not because you are told to, forced to, or programmed to by someone else; otherwise it's not true love.

Love relies on absolute free will in a person. If you are programmed to love, it's not love. Love is choosing to make sacrifices willingly for another. But if this free will is programmed, it's not free will; it's a will dictated by the programmer.

Unless AI has a soul as humans do (which it never will), I don't think it can ever show true love. Superficial love, as often seen in people too? Sure.

−4

biglybiglytremendous t1_irwklrv wrote

What is the soul, and why will AI never have it?

1

pdvdw t1_irxaeju wrote

In one way, the soul is life. We are talking about machines that will always remain machines. Not a living organism.

1

Singularity_enjoyer t1_irxva4g wrote

It's bold of you to assume humans have souls. I can't see why a bunch of electricity running through your brain can't be replicated by a bunch of electricity running through wires.

1

pdvdw t1_iryb3nz wrote

Consciousness (the fact that you are you, can look at your hands, and have a self-aware identity) has not been understood by any science to date. Saying I'm bold for assuming there's more than electricity running your identity may not be as bold as you imagine.

Machines don’t have the breath of life in them.

1

CancerPiss t1_is0fxjk wrote

Your concept of free will is an illusion, and you are programmed to love

1

pdvdw t1_is0th2v wrote

If we are programmed to love, why is this world full of hate? Why is love for our neighbor not our default? Only some people love their neighbor, because they try hard to. People are mean, but some choose to love them anyway. That is abnormal and hard; it's not in our nature.

1

CancerPiss t1_is0v7d2 wrote

We aren't all the same, and we aren't perfect

1

pdvdw t1_is0vt39 wrote

What do you measure perfection by? In a world where each of us is programmed differently, there can be no such thing. You're doing just as you were programmed, right?

But we know we have a consciousness that knows right from wrong. And you can choose right or wrong today, just as you choose to love your wife, or not to.

1

CancerPiss t1_is0wqfo wrote

I can't choose what I like or what I love. I cannot force myself to like something I don't; it's not up to the conscious part of my mind to decide.

1

pdvdw t1_is0xfgu wrote

You absolutely can. I may not love feeding the homeless or being around them, but I know it's a loving thing to do. So I go and do it without feeling love for the act or for them. In the process of learning more about their stories, I develop feelings of love for them and for the act.

If you only love those that are easy to love, what profit is there to your life? Love is a choice.

If you're married, you'll find yourself needing to love your wife even when you don't feel like it, or that marriage is dead in the water. True love is not 1s and 0s.

1