Skarr87 t1_ixe6ouh wrote

In my experience children tend to be little psychopaths. Right and wrong (morality) likely evolved along with humans as they developed societies. Societies give a significant boost to the survival and propagation of their members, so societies with moral systems conducive to larger and more efficient cooperation tend to propagate better as well. These moral systems then get passed on as the society propagates, and any society whose morals are not conducive to its own survival tends to die off.

Why do you believe an AI would definitely be incapable of empathy? Not all humans are even capable of empathy, and empathy can be lost through damage to the frontal lobe. For some who have lost it, it never returns; others are able to relearn to express it. If it was relearned, does that mean they are merely emulating it rather than actually experiencing it? How would that be different from an AI?

When humans get an intuition, a feeling, or a hunch, it isn't out of nowhere; they typically have some history or experience with the subject. For example, when a detective has a hunch that a suspect is lying, it could come from previous experience, or even from a bias picked up from correlations in the behavior of past lying suspects that other detectives haven't noticed. How is this fundamentally different from an AI finding an odd correlation in data using statistics? You could argue that an AI correlating data like this is forming a hunch, and that a human having a hunch is just drawing a conclusion from correlated data.
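To make that concrete, here is a toy sketch in Python (the cases and the "behavior" feature are entirely invented) of what a statistical hunch amounts to: a conditional frequency extracted from past experience.

```python
from collections import defaultdict

# Invented past cases: (observed behavior, whether the suspect lied).
past_cases = [
    ("avoids eye contact", True),
    ("avoids eye contact", True),
    ("avoids eye contact", False),
    ("steady eye contact", True),
    ("steady eye contact", False),
    ("steady eye contact", False),
]

# Tally how often each behavior co-occurred with lying.
counts = defaultdict(lambda: [0, 0])  # behavior -> [times lied, total cases]
for behavior, lied in past_cases:
    counts[behavior][0] += int(lied)
    counts[behavior][1] += 1

def hunch(behavior):
    """Estimate P(lying | behavior) from past cases."""
    lied, total = counts[behavior]
    return lied / total if total else 0.5  # no experience -> no lean either way

print(hunch("avoids eye contact"))  # ~0.67: the statistical "gut feeling"
print(hunch("steady eye contact"))  # ~0.33
```

Whether the detective's version runs on neurons and this one on tallies, both are conclusions pulled out of correlated experience.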

Note that I am not advocating using AI in policing; I believe that is a terrible idea that can and will be easily abused.

d4em t1_ixe8sn6 wrote

Our moral systems probably got more refined as society grew, but by our very nature as living beings we need an understanding of right and wrong to inform our actions. A computer doesn't have this understanding; it just follows the instructions it's given, always.

I'm not directly arguing that machines are incapable of empathy, although that follows by extension; the core of the argument is that machines are incapable of experience. Sure, you could train a computer to spit out a socially acceptable moral answer, but nothing would make that answer inherently moral to the computer.

I agree that little children are often psychopaths, but they're not incapable of experience. They have likes and dislikes. A computer does not care about anything; it just does as it's told.

The fundamental difference between a human hunch and the odd correlation an AI makes is that the correlation does not mean anything to the computer; it's just moving data around like it was built to do. It's a machine.

Skarr87 t1_ixekpu2 wrote

So if I am understanding your argument, and correct me if I am wrong, the critical difference between a human and a computer is that a computer isn't capable of sentience, and by extension sapience, or even consciousness more generally?

If that is the argument, then my take is that I'm not sure we can say that yet. We don't yet understand consciousness well enough to say it is impossible for non-organic things to possess. All we know for sure is that consciousness can seemingly be suppressed or damaged by altering or stopping biological processes within the brain. I am not aware of any reason why a machine could not, in principle, simulate those processes to the same effect (consciousness).

Anyway, it seems to me that your main problem with using AI for policing is that it would be mechanically precise in its application without understanding the intricacies of why crime is happening? For example, it might conclude that African American communities are crime centers without understanding that the real cause is that those communities tend to be poverty-stricken. So its predictions may end up being almost a self-fulfilling prophecy?
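That feedback loop is easy to simulate. Here is a minimal sketch (all numbers invented, and the "model" reduced to "send patrols where recorded crime is highest") showing how a tiny initial skew in the records compounds even when two districts have identical true crime rates:

```python
# Two districts with, by construction, the same underlying crime rate.
TRUE_RATE = 0.3
recorded = {"district_a": 12, "district_b": 10}  # slight initial skew

for year in range(5):
    # The "prediction": patrol wherever recorded crime is highest.
    target = max(recorded, key=recorded.get)
    # Crime happens in both districts, but only patrolled crime gets
    # recorded, so only the targeted district adds to the statistics.
    recorded[target] += round(100 * TRUE_RATE)
    print(year, dict(recorded))
```

After five rounds district_a has piled up records while district_b's stay frozen at 10, and the data now "confirms" a difference that the patrol allocation itself created.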

d4em t1_ixetoqs wrote

I'm not talking about sentience, sapience, consciousness, or anything like that; I'm talking about experience. All computers are self-aware; their code includes references to self. I would say machine learning constitutes a basic level of intelligence. What they cannot do is experience.

It's actually very interesting that you say we don't have a good enough understanding of consciousness yet. The thing about consciousness is that it's not a concrete term; it's not a defined logical principle. In considering what consciousness is, we cannot just do empirical research (it's very likely consciousness cannot be empirically proven); we have to make our own definition, we have to make a choice. A computer would be entirely incapable of doing so. The best it could do is measure how the term is used and derive something from that. Those calculations could get extremely complicated and produce results we wouldn't have come up with ourselves, but it would never form a genuine understanding of what "consciousness" entails.

This goes for art too: computers might be able to spit out images, measure which ones humans think are beautiful, and use that data to create a "beautiful" image, but there would be nothing in that computer experiencing the image. It's just following instructions.

There's a thought experiment called the Chinese Room. In it, a man who does not speak a word of Chinese is placed in a room. When you want your English letter translated into Chinese, you slide it through a slit in the wall. The man then goes to work, looking up everything related to your letter in a stack of dictionaries and grammar guides. He's extremely fast and accurate, and within a minute a perfect translation of your letter comes back out through the slit. The question is: does the man in the room know Chinese?

For a more accurate comparison: the man does not know English either; he looks that up in a dictionary as well. And it's not a man but a piece of machinery, which finds the instructions for how to read your letter and how to hand it back to you in yet another dictionary. Every time you hand it a letter, the machine has to look up what a "letter" is and what to do with one.
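You can sketch the whole room in a few lines. This toy version (a two-entry phrasebook, obviously not real translation) makes the point visible: every step, including what to do with a letter at all, is table lookup, and nowhere in the program is there a plausible place for understanding to live.

```python
# English -> Chinese phrasebook: the "dictionaries" on the shelf.
PHRASEBOOK = {"hello": "你好", "thank you": "谢谢"}

# Even the handling instructions are just another lookup table.
PROCEDURES = {"letter": "translate each phrase, pass the result back through the slit"}

def the_room(letter):
    # "What is a letter and what do I do with it?" is itself retrieved,
    # never understood.
    instruction = PROCEDURES["letter"]
    return [PHRASEBOOK.get(phrase, "?") for phrase in letter]

print(the_room(["hello", "thank you"]))  # -> ['你好', '谢谢']
```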

As for the problems with using AI or other computer-based solutions in government: yeah, pretty much. The real risk is that most police personnel aren't technically or mathematically inclined, and humans have shown a tendency to blindly trust whatever the computer or the model tells them. But also, if there were a flaw in one of the dictionaries, it would be flawlessly copied into every letter. And we're using AI to solve difficult problems whose answers we might not be able to double-check.

Skarr87 t1_ixhrn5o wrote

I guess I'm confused by what you mean by experience. Do you mean something like sensations? For example, the ability to experience the sensation of the color red, or emotional sensations like love, as opposed to merely detecting light and recognizing it as red, or emulating the responses that would correspond to an expression of love?

With your example of the man translating words, I'm not 100% sure that isn't an accurate analogy for how humans process information. I know it's meant to contrast human knowledge with machine knowledge, but it seems pretty damn close to how humans actually process things. There are cases where people have had brain injuries in which they essentially lose access to the parts of their brain that process language. They will straight up lose the ability to understand, speak, read, and write a language they were previously fluent in; the information just isn't there anymore. That would be akin to the man losing access to his dictionaries. So the question becomes: does a human even "know" a language, or do they just have what is essentially a relational database to reference?

Regardless, none of this matters for whether we should use AI for crime. Both of our arguments make essentially the same case, albeit from different directions: AI can easily give false interpretations of data and should not be used on its own to determine policing policy.
