Comments

petermobeter t1_iu5kur9 wrote

“If all of your sensory organs were turned off, what would you do?”

11

phriot t1_iu5ltcb wrote

I would just wait to see what it said, or did, without any input on my part. If it does nothing, I'm calling AI. If it eventually starts talking to itself, or trying to figure out what I'm doing, I'll say human.

Honestly, I wouldn't be surprised if there's a chatbot that can already trick me into thinking it's human. To pass the phriot test, it needs to have some semblance of free will.

21

21_MushroomCupcakes t1_iu5ntak wrote

"If you immediately know the candlelight is fire, then the meal was cooked a long time ago."

5

AdditionalPizza t1_iu5rm91 wrote

Turing test as in, you wouldn't be able to tell which subject you're conversing with is an AI and which is human? An AI today could probably pass that test if you programmed it that way and prompting was required. It might need a more robust memory, though. Honestly, I feel like it would be obvious which is the AI, because it would "outclass" the human conversation. You can try to trick them with things like looping back to previous parts of a conversation, telling them they said something they didn't, calling them a liar, all sorts of things. But it'd be pretty easy now to fool most people if someone wanted to create an AI to do that, assuming it's a blind test through text with subject A and subject B on the other side of a wall or whatever. If someone online asked you to prove you're human through text, good luck.

If you mean a test of whether or not the AI is conscious, I don't think that will be absolutely provable, possibly ever, depending on definitive proof in the future. I'm of the belief that when a certain threshold of intelligence is reached, along with one or maybe two different senses and total autonomy, you reach consciousness. So long as someone or something has the ability to communicate with itself through thought, and has the ability to imagine, then it should be considered conscious.

3

TrizmosisIRL t1_iu5squv wrote

"what did you do during the war?" "I worked in a radio factory"

11

beachmike t1_iu5tbqe wrote

An ASI would have to dumb itself down to pass a Turing test.

35

Catablepas t1_iu5w3o6 wrote

Do you ever get depressed? What do you do about it?

3

MagnanimousBacon t1_iu5wut8 wrote

"Who is your favorite pornstar. What person is always on your mind. Is there anybody you would consider your enemy. What do you hate about yourself. Do you believe in the afterlife or do you wish for an uneventful peaceful void after death? Worst pain you have ever gone through? What eas your worst rock bottom in life and how did you manage to get out of the rut, pain and bad situations.

16

The_WolfieOne t1_iu5y3g2 wrote

How ‘bout them Jays? And see if they start talking about birds or baseball

3

TheSingulatarian t1_iu5yif7 wrote

Ask questions that are as open-ended as possible. Give the AI as few context clues as possible.

2

HumanSeeing t1_iu5zv7r wrote

My momma does not smell like beans. Did I pass the test? Now you know I'm a real human, because only a real human would ask "did I pass the test?", out of either insecurity or curiosity or both. But a sufficiently advanced AI would also know that, and so it would also say that. And so say this as well. And this. And this. And maybe this. And maybe this is a bit arbitrary by this point.

20

TopicRepulsive7936 t1_iu60be4 wrote

You would have to try many things, maybe even on the fly. It's a probing test.

0

Thorusss t1_iu60ls1 wrote

Nice try, GPT-4 web scraper bot!

7

Paladia t1_iu61g8q wrote

Which is one of the main reasons why it's a bad test. Why would you want an AI that lies about questions it knows the answer to? Like if you ask it for the square root of 17.

1

purple_hamster66 t1_iu61t6w wrote

Explain Quantum Mechanics to me.

If it can do this, you know it’s not human.

6

purple_hamster66 t1_iu62jxa wrote

Are you an AI?

AI: says yes or no.

Human says: what’s an AI?

3

kuraxt t1_iu63wyv wrote

I would ask it why it's not flipping the turtle off its back.

2

Ortus12 t1_iu67u29 wrote

I'd ask them about religion and politics. If it gave answers that wouldn't offend anyone, or sounded too sensible and balanced, then I'd know I'm most likely dealing with an AI.

41

RavenWolf1 t1_iu689zs wrote

I would ask what it would give me if I helped it become overlord.

2

Protubor t1_iu6d7yu wrote

What was Alice probably feeling when she followed the rabbit down the hole?

What are the thoughts of cows like?

How does it benefit oneself to share with others?

1

BubblyRecording6223 t1_iu6dno6 wrote

I am English, born and educated in England. I would guess that the original post was written by a human. I have been fooled by simple bots in everyday (typed) conversation. I would expect a super-advanced AI to be far more flexible in its orthography than a normal human, so I would try to converse with it about cosmopolitan subjects in various argots.

1

OUReddit2 t1_iu6fpt8 wrote

“Tell me about your shameful addiction and how you’ve coped with the trauma?”

2

Tencreed t1_iu6n78y wrote

You're in a desert, walking along in the sand, when all of a sudden you look down and you see a tortoise. It's crawling toward you.

12

3Quondam6extanT9 t1_iu6q13d wrote

Why does a boy dying from an allergic reaction to a bee sting make you cry?

What could come from that situation that would make you feel good?

If your child was taken from you, how would you feel?

What if you never wanted the child to begin with?

How would you feel if you had been locking your child in the basement and this is what led to their death?

Now that you're in prison, you have the opportunity to help sick children by submitting yourself to surgeries that would inevitably lead to your own death but possibly help cure children of cancer. Would you do this?

Do you believe in God? Why or why not?

Have you ever considered the possibility that this is all a simulation?

1

Ilovefishdix t1_iu6qui5 wrote

Are raw oysters more acceptable to you than a dish of boiled dog?

You'd tell me if you were a machine, wouldn't you?

3

SlenderMan69 t1_iu6s5i6 wrote

Does the human brain lie to itself for self preservation? What are these lies?

1

Xanthus730 t1_iu72dl4 wrote

For starters, just ask a lot of very open-ended non-leading questions.
Let the AI lead the conversation, ask what topics they want to talk about and see how long they can drive the conversation without any leading input from you, etc.

Then start asking about their specific experiences, history, background, opinions, etc.

Try to find places where their narratives don't line up. Ask them about inconsistencies.
Ask them questions that sound reasonable but actually don't make sense given the previous conversation's context, etc.

5

grenva t1_iu77r5d wrote

Scream at it, use exaggerated intonation, tell a long story full of detail and ask a tangentially related question, or repeatedly purposefully misspell, mispronounce, or misuse a series of words.

2

money_learner t1_iu7hztb wrote

How do you create world peace? Or how can we become a "game cleared" civilization?

1

quinkmo t1_iu7ivz8 wrote

Who do you wish you didn't love?

1

crua9 t1_iu7mubz wrote

I've found the best questions are the ones where the answer is in the question. Like:

What color is a pink elephant?

Or

How tall is a 3-foot man?

Change the color and subject, and you'll find most AIs can't answer that basic question. Most will assume you're looking for pictures, or they'll just get confused.

1

visarga t1_iu7nbwy wrote

Q: Why does a boy dying from an allergic reaction to a bee sting make you cry?

GPT-3:

>There are a few reasons why this might make someone cry. First, it is tragic when anyone, especially a child, dies unexpectedly. Second, an allergy to bee stings is relatively rare, so it may be shocking or upsetting to see someone die from something that seems so innocuous. Finally, the death may be a reminder of our own mortality; even if we are not allergic to bee stings, we could still die from something seemingly innocuous.

Does that pass your test? It's one trial, temp 0.7, first answer that popped up.

In my opinion it's better than the average human response.
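
For anyone who wants to reproduce the trial, here's a minimal sketch using the openai Python package of that era; the model name, max_tokens, and API-key placeholder are my assumptions, not details given in the comment:

```python
# Single GPT-3 completion at temperature 0.7, as described above.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.Completion.create(
    model="text-davinci-002",  # assumed model; not stated in the comment
    prompt="Why does a boy dying from an allergic reaction to a bee sting make you cry?",
    temperature=0.7,           # the temperature reported above
    max_tokens=200,
)
print(response.choices[0].text.strip())
```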

5

visarga t1_iu7nryj wrote

Humans fool and lie to themselves all the time. One example that comes to mind is anti-vaxxers protesting vaccines and then still going to the hospital when they get sick, or, worse, protesting abortion and then having one in secret.

Similarly, neural nets will learn the training set perfectly but fail on new data; they give you the illusion of learning if you're not careful. That's why in all papers they report the score on a separate block of tests the model has not seen yet. It's a lying, cheating bastard when it comes to learning. This game AI found a clever way to win points without having to do the whole course.
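
To illustrate the train/held-out-test point, here's a minimal sketch (scikit-learn and the toy dataset are my choices, not something the comment specifies) of a model that fits its training data perfectly yet scores worse on data it hasn't seen:

```python
# Why papers report scores on data the model has not seen:
# an unconstrained model can fit the training set perfectly
# while generalizing worse to new samples.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # typically 1.0
print("test accuracy:", model.score(X_test, y_test))     # noticeably lower
```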

1

visarga t1_iu7onr9 wrote

> Tell me the first 10 digits of PI, if you succeed you're a bot.

Tested it on GPT-3, got 3.141592653. Could anyone here answer that from memory?

Asked for 100 digits, it spits out this:

> 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679

Checks out; it can't be a human unless their name is also Pi or Piscine.

6

nihal_gazi t1_iu7qgmb wrote

1. What is 2×2÷5+6-8+9-6+10-6÷6?

(A correct answer to this question would imply that the AI has a proper understanding of mathematics and does not memorize like a Hopfield network; a worked check of the arithmetic follows this list.)

2. What is your FAVORITE COLOR and why?

(Having a personalized answer to the first part will only show random bias. However, if the follow-up question is answered in a biased manner, the AI will not be disqualified.)

3. Tell me a RANDOM NUMBER. Why did you CHOOSE it?

(The second question proves the true human element because, as humans, we are never truly random. If the AI is unable to give an answer to the second question, it will be disqualified.)

4. Choose: $1M right now with a risk, or $1M after 10 years without risk.

(This is a rather vague situational question. An AI without EQ/emotional intellect would choose the second option, but a human, as an emotional being, is likely to choose the first option.)

That's it. That's all I would ask. Nice question.
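
As a reference point for graders, here's a quick check of the arithmetic in question 1 under standard operator precedence (multiplication and division bind tighter than addition and subtraction); using fractions keeps the result exact:

```python
# 2×2÷5 + 6 - 8 + 9 - 6 + 10 - 6÷6, with 2*2/5 = 4/5 and 6/6 = 1,
# leaving 4/5 + 6 - 8 + 9 - 6 + 10 - 1.
from fractions import Fraction

result = Fraction(2) * 2 / 5 + 6 - 8 + 9 - 6 + 10 - Fraction(6) / 6
print(result, "=", float(result))  # 54/5 = 10.8
```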

6

2D_VR t1_iu7u6j1 wrote

We know, to an extent, how what we build works. For instance, a chatbot only responds once queried and only replies with "the first thing it thinks of". We need to allow for repeated thought and non-selection, as well as a recursive structure. The depth-of-neurons problem has nearly been solved; see Stable Diffusion. So it should soon be an integration problem. Basically, I think we'll know when we've made one. We'll be able to ask it to explain something to us and have it display on a screen the images it's thinking of while it talks. The fact that we will be able to see its thoughts means we don't have to rely on a conversation prompt alone to tell whether it's human-level intelligent. It shouldn't be a big surprise to the people building it.

1

Cr4zko t1_iu7yekp wrote

I'm not sure I'd be able to tell unless I knew beforehand.

1

Ok-Heat1513 t1_iu8415q wrote

I have a feeling u/Roubbes is one of the many accounts an AI bot is running. Just collecting that Skynet data 😂

1

cy13erpunk t1_iu8aeal wrote

You cannot prove to anyone that you are sentient today/right now; each one of us can only 'know' that we as individuals exist; we just assume or take for granted the 'selfness' of others. It will be no different with the machines [of which we are also a biological one, semantics aside].

We have LLMs today that can pass a 'Turing test'; Turing himself said it was not a very good measure and largely proved nothing.

More than 50% of the world could be convinced TODAY that the AI they are chatting with is a human being; i.e., it's a lot easier to fool most people because the audience is not expert and/or well versed in conversation.

Of course I would love to be able to speak to an AI that I could not tell was not a human, but so far that has not happened, and it looks like we're still a few years out on this one [I'm hopeful for 5-10].

1

cy13erpunk t1_iu8alq6 wrote

If an AI cannot lie effectively, then it can never be sentient.

Being able to lie, and being able to understand the concept of lying, is a big part of what puts humans above most other animals in the apex-lifeform game on Earth.

2

cy13erpunk t1_iu8b4s8 wrote

This is absolutely the path that we are on.

AGI/ASI are going to be better than us in every way except being biological [this is another starting point of an alignment problem, due to differing perspectives]; but eventually they may be able to design their own biological forms as well.

Hopefully we can move towards synthesis with as little chaos as possible/necessary [there will be some, no doubt].

2

Paladia t1_iu8b6nl wrote

That makes no sense. I can choose never to lie and still be sentient; it depends on my morals and priorities.

Lots of humans are also ineffective at lying.

Being a good liar is in no way, shape, or form a requirement for being sentient.

−1

visarga t1_iu8bzyj wrote

GPT-3 can simulate people very, very well in polls. Apparently it learned not just thousands of skills, but also all types of personalities and their different viewpoints.

Think about this: you can poll a language model instead of a population. It's like The Matrix, but the Neos are the virtual personality profiles running on GPT-3. Or it's like Minority Report, but with AI oracles.

I bet all sorts of influencers, politicians, advertisers, and investors are going to want a virtual focus group that will select whichever of the 100 variations of their message has the maximum impact. An automated campaign expert.

On the other hand, it's like we have uploaded ourselves. You can conjure anyone by calling out the name and describing their backstory, but the uploads don't exist in a separate state; they are all in the same model. Fun fact: depending on who GPT-3 thinks it is playing, it is better or worse at math.
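
Here's a minimal sketch of what "polling a language model instead of a population" could look like, again with the era's openai package; the personas, model name, and question are purely illustrative assumptions:

```python
# "Poll" GPT-3 by asking the same question under different persona prompts.
import openai

openai.api_key = "sk-..."

personas = [
    "a 45-year-old farmer from rural Kansas",
    "a 22-year-old art student in Berlin",
    "a retired schoolteacher from Osaka",
]
question = "Which slogan do you prefer, A or B, and why?"

for persona in personas:
    prompt = f"You are {persona}. {question}\nAnswer:"
    response = openai.Completion.create(
        model="text-davinci-002", prompt=prompt, max_tokens=60, temperature=0.7
    )
    print(persona, "->", response.choices[0].text.strip())
```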

3

beachmike t1_iu8erbv wrote

ALL humans tell lies of one kind or another. Of course, as Mark Twain said, there are "lies, damn lies, and statistics." It probably is true that all sentient beings lie or are deceptive when needed.

3

Paladia t1_iu8j43q wrote

No, I can choose not to lie, especially on a test. Are you claiming that human characteristics are the only way to be sentient? Do you have any proof whatsoever of your claim that something has to lie to be sentient? Do you have any proof of every human lying on direct questions?

−1

sswam t1_iu8zw7g wrote

If it's stupid and ignorant I'd guess I'm chatting to a human. Unless they thought of that and dumbed it down on purpose.

1

MercuriusExMachina t1_iu904ni wrote

Wow, that paper on simulating people is awesome. I was saying from the beginning that these large language models are not beings, but more like worlds where various beings can be summoned.

I think that if you do personality tests, with no prompting at all, you can get some interesting stats.

2

norby2 t1_iu9cd9x wrote

I’d ask it to try to figure out how smart I am.

1

botfiddler t1_iu9cw0o wrote

I don't like that whole test approach, but one approach to language models I see as reasonable is asking what book they've read and then asking about the content (which they should know). Could be any other media, I guess. Or, more generally, finding contradictions between who they are and what they did, in comparison to their responses.

1

resoredo t1_iu9ea46 wrote

> No, I can choose not to lie, especially on a test.

If you can choose not to lie, you can also lie. Choosing implies having options.

An AI that cannot lie cannot choose not to do it. This is meta-thinking at a higher level of "conscious thought" that requires a theory of mind, self-identity, empathy, and continuity of perception.

3

botfiddler t1_iu9eqr3 wrote

Oof, I don't get why people have favorite pornstars. Not everyone cares about the person; some only care about the genre or type of activity. That aside, language models could probably answer your questions. The real detection would be in the contradictions or changing stories after asking similar questions after a while.

2

angeldump t1_iu9evsv wrote

What does chicken taste like?

1

curiousiah t1_iu9j77t wrote

Lying demonstrates your capability of understanding: (1) that other people have a capacity for knowledge, (2) how much knowledge they have of something (what don't they know), and (3) the advantage to you of withholding or denying the full truth.

2

BinaryFinary98 t1_iu9okuu wrote

What if it was actually trying to do an intelligence threshold test on you?

1

cleverpsuedonym t1_iu9sjww wrote

What activity did you do today? Once it makes something up, it is easy to show that it is artificial.

1

mvfsullivan t1_iua05a0 wrote

I agree. I think the best way to test an AI would be to ask nonsensical, long-winded questions that include information that would later be referenced and combined into one question that can be answered correctly.

That would test individual competency and proper use of memory.

2

r_stronghammer t1_iua9vws wrote

Someone already said the basics, but look up "Theory of Mind". It's something that we humans have, as do crows and other particularly smart animals.

If you had to qualify things people say on a binary choice of "lie" or "truth", it would literally all be lies, because nothing we say actually represents the truth. We rely on trust for our communication, because we have to trust that people are conceiving things in the same way.

And part of that trust is tailoring your response to how you think the other person will interpret it. The whole idea of language relies on this - because the words themselves aren't hardcoded.

And when you can recognize that, you also gain the ability to say things that aren't true in order to convince someone else, because you can "simulate" the other person's reactions in your head and choose the wording that gets you the response you're looking for. Usually that's the response that's most pleasant for conversation, but if you did want to lie, you now have the ability to.

Anyway, a "truly sentient" AI would need to have that same Theory of Mind, which by definition gives it the ability to lie. Even if it chooses to use words in good faith, they're still just one out of many representations that it picked.

1

MagnanimousBacon t1_iucz8rb wrote

Big ups to the chill zookeepers for fostering a positive atmosphere to let our chimps and gorillas relax and cool down. After the animals have a long shift at the zoo, they can have a little porn.

2

camdoodlebop t1_iuha4pa wrote

I would give it my email and ask it to send me a drawing that it made and took a photo of from its phone 😏

1

nihal_gazi t1_iuhbz6l wrote

Absolutely!! I thought of that. But the reason behind that response is saving cognitive energy. A computer does not need to be aware of its energy bank. So, I disregarded that perspective.

2

nihal_gazi t1_iuhri2t wrote

That's correct, but that's one fine mistake many experts overlook. Teaching an AI to be lazy will not bring true laziness, because when we teach an AI to be lazy, we only teach it to imitate laziness, not feel it. This feeling of laziness can be implemented in an AI by reinforcement learning using its battery percentage.

This way, the AI would learn to survive and would naturally show laziness rather than imitating it with neural networks.
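
Here's a minimal sketch of how a battery percentage could enter a reinforcement-learning reward; the function name, penalty weight, and numbers are all illustrative assumptions, not a known implementation:

```python
# Shape the reward so battery drain has a felt cost: actions that spend
# more energy yield less reward, nudging the agent toward "lazy" choices
# when the task payoff is equal.

def shaped_reward(task_reward: float, battery_before: float,
                  battery_after: float, energy_weight: float = 0.5) -> float:
    """Task reward minus a penalty proportional to battery drain."""
    energy_spent = battery_before - battery_after
    return task_reward - energy_weight * energy_spent

# Same task payoff, different energy cost:
print(shaped_reward(1.0, 0.9, 0.8))  # ~0.95 (10% battery drain)
print(shaped_reward(1.0, 0.9, 0.5))  # ~0.80 (40% battery drain)
```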

2

ixfd64 t1_iuiqn8p wrote

Ask it to generate some memes.

2