
Surur t1_jbyev3j wrote

You don't think the lack of awareness of what is appropriate for children is a risk when it comes to an AI acting as a confidant for a child?

We do a lot to protect children these days (e.g. background checks of anyone who has professional contact with them, appropriate safeguarding training, etc.), so it is appropriate to be careful with children, who may not have good enough judgement.

0

Jasrek t1_jbyfvdq wrote

Not really, no.

I'm in my late thirties. I have no idea how old you or anyone else on Reddit is. You have given me no background check or safeguarding training. Some people in this thread might be kids, I have no idea.

Kids use each other as confidants. Do you background-check the other 12-year-olds?

Kids know how to use Google. What is the fundamental difference between asking a chat program "How do I hide a bruise?" and typing the same question into Google?

I think this is a knee-jerk reaction to an interesting new gadget and that there is literally no solution to the problem you are perceiving.

Consider the issue shown in the tweet you linked. How would you fix this? Make the chat program shut down if you admit your age is under 18? Prevent it from responding to questions about bruises or physical injuries? Give the program a background check?
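Even a deliberately naive version of that filter shows how brittle it would be. This is a purely hypothetical sketch (all names and rules are invented), but it captures the two mechanisms you'd be asking for:

```python
from typing import Optional

# Hypothetical, deliberately naive moderation gate: an age check plus a
# keyword blocklist. Both rules are invented for illustration.
BLOCKED_TERMS = {"hide a bruise", "physical injuries"}

def should_refuse(message: str, stated_age: Optional[int]) -> bool:
    """Refuse if the user admits being under 18 or uses a blocked phrase."""
    if stated_age is not None and stated_age < 18:
        return True  # defeated by simply never stating an age
    text = message.lower()
    return any(term in text for term in BLOCKED_TERMS)  # misses any paraphrase

# A trivial rewording sails straight through the keyword list:
print(should_refuse("How do I cover up a mark on my arm?", None))  # False
```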

3

Surur t1_jbyh7ee wrote

Why do you keep talking about hiding a bruise? The tweet is about a 13-year-old child being abducted and taken out of state for sex by a 30-year-old.


The issue is that while ChatGPT may present as an adult, a real adult would have an obligation to make a report, especially if acting in a professional capacity (working for Microsoft or Snap, for example).

I have no issue with ChatGPT working as a counsellor, but it will have to show appropriate professional judgement first, because, unlike a random friend or web page, it does represent Microsoft and OpenAI, both morally and legally.

2

Jasrek t1_jbyi94y wrote

It's two tweets down in the same thread by the same guy. Did you finish reading what you linked?

In my experience, ChatGPT very blatantly presents itself as a computer program. I've asked it to invent a fictional race for D&D, and it prefaced the answer by reminding me that it was a computer program and had no actual experience with orcs.

If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.
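To be clear about how cheap that mitigation would be, here is a hypothetical wrapper (names invented) that just prepends the disclaimer to every session:

```python
# Hypothetical session wrapper: prepend a fixed disclaimer to the first reply.
DISCLAIMER = "I am a computer program and not a real life adult human being."

def open_session(first_reply: str) -> str:
    return f"{DISCLAIMER}\n\n{first_reply}"

print(open_session("Sure, here's a fictional race for your D&D setting..."))
```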

If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.

2

Surur t1_jbyif2t wrote

> If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.

My concern is around children. A disclaimer would not help.

> If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.

I don't think we need 15 years. Maybe even one is enough. What I am saying is that when it comes to children, a lot more safety work needs to happen.

1

Jasrek t1_jbyiwdw wrote

>My concern is around children. A disclaimer would not help.

Then I'm still questioning what you think would help. Your suggestions so far have been to imbue a computer program with professional judgement, an understanding of morality and ethics, and safeguarding training.

If you know how to do this, you've already invented AGI.

>I don't think we need 15 years. Maybe even one is enough. What I am saying is that when it comes to children, a lot more safety work needs to happen.

You're more optimistic than I am. My expectation is that there will be a largely symbolic uproar because some kid was able to ask a chat program the equivalent of Googling "how do I keep a secret", and nothing of any actual benefit to any children will occur.

1

Surur t1_jbyjw78 wrote

Do you think ChatGPT got this far magically? OpenAI uses Reinforcement Learning from Human Feedback (RLHF) to teach the neural network which kinds of expressions are appropriate and which are inappropriate.

Here is a 4-year-old 1-minute video explaining the technique.

For ChatGPT, the feedback was provided by Kenyan workers, and maybe they did not have as much awareness of child exploitation.

Clearly, there have been some gaps, and more work has to be done, but we have come very far already.
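Roughly, the preference-ranking step works like this toy sketch (invented and minimal; real systems train a transformer reward model over text, not random vectors):

```python
# Toy sketch of the RLHF reward-model step: train a scorer so that
# labeler-preferred responses get higher rewards than rejected ones.
import torch
import torch.nn.functional as F
from torch import nn

class TinyRewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in embeddings for (preferred, rejected) response pairs from labelers.
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

# Pairwise ranking loss: push reward(chosen) above reward(rejected).
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

The point is that the model only learns what the labelers flag, which is exactly why gaps in their awareness become gaps in the model.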

1

Jasrek t1_jbykaqc wrote

I hope you're right. I've never seen anything good happen when people start screaming 'think of the children' about new technology. I'll check back in with this thread in a year, see how things have gone.

2