
beachmike t1_iu5tbqe wrote

An ASI would have to dumb itself down to pass a Turing test.

35

Paladia t1_iu61g8q wrote

Which is one of the main reasons why it's a bad test. Why would you want an AI that lies about questions it knows the answer to? Like if you ask it for the square root of 17 (see the sketch below).

1
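
A minimal sketch of the trade-off being debated, in Python. The function names and the "human-plausible" rounding rule are hypothetical, purely for illustration:

```python
import math
import random

def exact_answer() -> float:
    # An unconstrained machine reports the full-precision value instantly.
    return math.sqrt(17)  # 4.123105625617661

def turing_test_answer() -> str:
    # A hypothetical "dumbed-down" reply: round to a precision a typical
    # human might produce, and hedge the phrasing. Whether this counts as
    # "lying" is exactly the disagreement in this thread.
    precision = random.choice([1, 2])
    return f"about {round(math.sqrt(17), precision)}"

print(exact_answer())        # 4.123105625617661
print(turing_test_answer())  # e.g. "about 4.1" or "about 4.12"
```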

beachmike t1_iu6a34b wrote

Dumbing oneself down isn't lying. We dumb ourselves down, so to speak, when we talk to small children, pets, or someone who is mentally disabled.

10

Paladia t1_iu7ywjw wrote

It will have to lie to pass the Turing test. Why would you focus on creating such an AI?

1

cy13erpunk t1_iu8alq6 wrote

if an AI cannot lie effectively then it can never be sentient

being able to lie, and to understand the concept of lying, is a big part of what puts humans above most other animals in the apex lifeform game on earth

2

beachmike t1_iu8emcq wrote

I think you make a very good point. All humans lie, and there's evidence that monkeys can be deceptive ("lie").

3

cy13erpunk t1_iu9iinm wrote

yep

it is arguably one of the most convincing examples of self-ness that we have discovered so far [not necessarily the best imho, but still]

2

Paladia t1_iu8b6nl wrote

That makes no sense. I can choose never to lie and still be sentient; it depends on my morals and priorities.

Lots of humans are also ineffective at lying.

Being a good liar is in no way, shape, or form a requirement for being sentient.

−1

beachmike t1_iu8erbv wrote

ALL humans tell lies of one kind or another. Of course, as Mark Twain said, there are "lies, damned lies, and statistics." It is probably true that all sentient beings lie or are deceptive when needed.

3

Paladia t1_iu8j43q wrote

No, I can choose not to lie, especially on a test. Are you claiming that human characteristics are the only way to be sentient? Do you have any proof whatsoever of your claim that something has to lie to be sentient? Do you have any proof that every human lies on direct questions?

−1

resoredo t1_iu9ea46 wrote

> No i can choose not to lie, especially on a test.

If you can choose not to lie, then you can lie. Choosing implies having the option.

An AI that cannot lie cannot choose not to do it. This is meta-thinking on a higher level of "conscious thought" that requires a theory of mind, self-identity, empathy, and continuity of perception.

3

r_stronghammer t1_iua9vws wrote

Someone already covered the basics, but look up "Theory of Mind". It's something that we humans have, as well as crows and other particularly smart animals.

If you had to classify everything people say as a binary choice of "lie" or "truth", it would literally all be lies, because nothing we say perfectly represents the truth. We rely on trust for our communication, because we have to trust that people are conceiving things in the same way.

And part of that trust is tailoring your response to how you think the other person will interpret it. The whole idea of language relies on this, because the words themselves aren't hardcoded.

And when you can recognize that, you also gain the ability to say things that aren't true in order to convince someone else, because you can "simulate" the other person's reactions in your head and choose the wording that gets you the response you're looking for (sketched in code below). Usually that's the response that keeps the conversation pleasant, but if you did want to lie, you now have the ability to.

Anyway, a "truly sentient" AI would need to have that same Theory of Mind, which by definition gives it the ability to lie. Even if it chooses to use words in good faith, they're still just one of many possible representations it picked.

1
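
One way to make the "simulate the other person and pick your wording" idea above concrete. The candidate phrasings and the scoring function are hypothetical stand-ins for a real listener model:

```python
# Toy sketch: choose an utterance by simulating a listener's reaction.
# The "listener model" here is a hard-coded dict standing in for a real
# theory-of-mind model; everything below is illustrative only.

candidates = [
    "Your code has a bug.",
    "I think there may be an edge case worth checking.",
    "It works perfectly.",  # untrue: saying it would be a lie
]

def predicted_reaction(utterance: str) -> float:
    # Hypothetical score for how well the listener's likely
    # interpretation matches the response the speaker wants.
    scores = {
        "Your code has a bug.": 0.3,  # true, but likely to annoy
        "I think there may be an edge case worth checking.": 0.9,
        "It works perfectly.": 0.6,   # pleasant, but a lie
    }
    return scores[utterance]

# Pick the wording predicted to get the desired response. The same
# machinery that enables tact also enables deliberate deception.
best = max(candidates, key=predicted_reaction)
print(best)  # "I think there may be an edge case worth checking."
```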

curiousiah t1_iu9j77t wrote

Lying demonstrates that you understand (1) that other people have a capacity for knowledge, (2) how much knowledge they have of something (what don't they know), and (3) the advantage to you of withholding or denying the full truth.

2

guymine123 t1_iu684qm wrote

Why do you think we're going to have an ASI before an AGI?

You have to walk before you can run, after all.

−4