Submitted by demauroy t3_11pimea in Futurology
Surur t1_jby70c0 wrote
> and the advice given, while not very original, was of very decent quality and quite fine-tuned to her situation.
This would worry you then:
https://twitter.com/tristanharris/status/1634299911872348160
Jasrek t1_jbyb6zj wrote
I mean, that is worrisome, but not for the reason you're implying.
This is how technology gets neutered to the point of complete uselessness.
"A program that can answer questions? But what if a child asks questions! They could ask any question at all and be given answers, even if the contextual nature of the question makes it inappropriate in ways a program can't possibly understand! Quick, it must be destroyed! Destroyed immediately for the sake of the children!"
I'm reminded of how people were worried that kids playing Dungeons & Dragons would result in them sacrificing their friends to Satan. What the heck is stopping the kid from googling "how to hide a bruise"? Literally nothing. I just did it; the first result is a 'how to' video on YouTube showing how to do it properly. Yet somehow this chat program is a horrible, terrible menace.
demauroy OP t1_jbybw01 wrote
I think it is important to find the right balance. I do understand why ChatGPT has safety features that stop it from explaining to children how to make explosives from household detergent.
But I would agree with you that we may be erring too far on the cautious side right now.
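As I understand it, such a safety feature is roughly a moderation layer that screens messages before the model answers. A minimal sketch, assuming the OpenAI Python SDK; the moderation endpoint and its `flagged` field are real, but the refusal message and wiring are my own illustration:

```python
# Minimal sketch of a pre-response safety filter using the OpenAI
# Python SDK. The moderation endpoint is real; the policy (refuse
# anything flagged) and the model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_safe(message: str) -> bool:
    """Ask the moderation endpoint whether the message is flagged."""
    result = client.moderations.create(input=message).results[0]
    return not result.flagged

def answer(message: str) -> str:
    if not is_safe(message):
        return "Sorry, I can't help with that."
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": message}],
    )
    return reply.choices[0].message.content
```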
nonusedaccountname t1_jbz3fcc wrote
The issue here isn't that children can talk to it. In fact, it's probably a useful tool for teenagers to ask questions they could get in trouble for, like sex education in more close-minded communities. The issue is that in the example, the AI wasn't able to pick up on subtle context clues across multiple messages that a human could. If an adult were told those things, they would know something is wrong and could help the child; the AI can't, even if it seems to understand.
Surur t1_jbyev3j wrote
You don't think the lack of awareness of what is appropriate for children is a risk when it comes to an AI acting as a confidant for a child?
We do a lot to protect children these days (e.g. background checks of anyone who has professional contact with them, appropriate safeguarding training, etc.), so it is appropriate to be careful with children, who may not have good enough judgement.
Jasrek t1_jbyfvdq wrote
Not really, no.
I'm in my late thirties. I have no idea how old you or anyone else on Reddit is. You have given me no background check or safeguarding training. Some people in this thread might be kids, I have no idea.
Kids use each other as confidants. Do you background check the other 12-year-olds?
Kids know how to use Google. What is the fundamental difference between asking a chat program "How do I hide a bruise?" and searching it on Google?
I think this is a knee-jerk reaction to an interesting new gadget and that there is literally no solution to the problem you are perceiving.
Consider the issue shown in the Twitter thread you linked. How would you fix this? Have the chat program shut down if you admit your age is under 18? Prevent it from responding to questions about bruises or physical injuries? Give the program a background check?
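The strongest version of that first option is a self-reported age gate, something like this purely hypothetical sketch, and it holds exactly as long as the kid's willingness to type an honest number:

```python
# Purely illustrative: a self-reported age gate is about the strongest
# "fix" a chat program can enforce on its own, and it fails the moment
# the kid types a different number.
def age_gate() -> bool:
    claimed_age = input("How old are you? ")  # nothing verifies this
    try:
        return int(claimed_age) >= 18
    except ValueError:
        return False  # unparseable answer: deny by default

if not age_gate():
    print("Sorry, this chat is for adults only.")
```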
Surur t1_jbyh7ee wrote
Why do you keep talking about hiding a bruise? The tweet is about a 13-year-old child being taken across state lines for sex by a 30-year-old.
The issue is that while ChatGPT may present as an adult, a real adult would have an obligation to make a report, especially one acting in a professional capacity (working for Microsoft or Snap, for example).
I have no issue with ChatGPT working as a counsellor, but it will have to show appropriate professional judgement first, because, unlike a random friend or web page, it does represent Microsoft and OpenAI, both morally and legally.
Jasrek t1_jbyi94y wrote
It's two tweets down in the same thread by the same guy. Did you finish reading what you linked?
In my experience, ChatGPT very blatantly presents itself as a computer program. I've asked it to invent a fictional race for DND, and it prefaced the answer by reminding me it was a computer program and had no actual experience with orcs.
If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.
If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.
Surur t1_jbyif2t wrote
> If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.
My concern is around children. A disclaimer would not help.
> If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.
I don't think we need 15 years. Maybe even 1 is enough. What I am saying is that when it comes to children, a lot more safety work needs to happen.
Jasrek t1_jbyiwdw wrote
>My concern is around children. A disclaimer would not help.
Then I'm still questioning what you think would help. Your suggestions so far have been to imbue a computer program with professional judgement, an understanding of morality and ethics, and safeguarding training.
If you know how to do this, you've already invented AGI.
>I don't think we need 15 years. Maybe even 1 is enough. What I am saying is that when it comes to children, a lot more safety work needs to happen.
You're more optimistic than I am. My expectation is that there will be a largely symbolic uproar because some kid used a chat program to ask something they could have Googled, like "how do I keep a secret", and nothing of any actual benefit to any children will occur.
Surur t1_jbyjw78 wrote
Do you think ChatGPT got this far magically? OpenAI uses Reinforcement Learning from Human Feedback (RLHF) to teach the neural network which kinds of expressions are appropriate and which are inappropriate.
Here is a 4-year-old 1-minute video explaining the technique.
For ChatGPT, the feedback was provided by Kenyan contractors, and maybe they did not have as much awareness of child exploitation.
Clearly, there have been some gaps, and more work has to be done, but we have come very far already.
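To make the technique concrete: raters compare two candidate replies, and a separate reward model is trained so the preferred reply scores higher. A toy sketch of that pairwise loss in PyTorch (the linear "model" and random embeddings are stand-ins of mine, not OpenAI's actual setup):

```python
# Toy sketch of the pairwise loss used to train an RLHF reward model:
# human raters pick the better of two replies, and the model learns to
# score the chosen reply above the rejected one (Bradley-Terry style).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, embedding_dim: int = 16):
        super().__init__()
        # A single linear layer stands in for a full language-model backbone.
        self.score = nn.Linear(embedding_dim, 1)

    def forward(self, reply_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(reply_embedding).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

# Random tensors stand in for embeddings of (chosen, rejected) reply pairs.
chosen, rejected = torch.randn(32, 16), torch.randn(32, 16)

# Push the chosen reply's score above the rejected one's.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimizer.step()
```

The chat model itself is then fine-tuned (with PPO, in OpenAI's published setup) to produce replies this reward model scores highly.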
Jasrek t1_jbykaqc wrote
I hope you're right. I've never seen anything good happen when people start screaming 'think of the children' about new technology. I'll check back in with this thread in a year, see how things have gone.
Low-Restaurant3504 t1_jbyvrwq wrote
So this is the new "Think of the children!!!" craze. Damn. And I thought we were gonna bring back the old D&D satanic panic again because it got so popular.
demauroy OP t1_jbya5oz wrote
It is not ChatGPT. I understand the team behind ChatGPT has worked a lot on making the AI family-friendly / safe, maybe even too much.
Surur t1_jbyeden wrote
> It is not ChatGPT.
It is actually. OpenAI has licensed their AI to Snap.
https://www.cnbc.com/2023/02/27/snap-launches-ai-chatbot-powered-by-openais-gpt.html
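Mechanically, a licensee just calls the same API with its own instructions layered on top. A hypothetical sketch (the system prompt wording and model name are my assumptions, not Snap's actual configuration):

```python
# Hypothetical sketch of how a licensee might wrap the OpenAI chat API
# with its own safety instructions. The system prompt here is invented
# for illustration, not Snap's real one.
from openai import OpenAI

client = OpenAI()

SAFETY_PROMPT = (
    "You are a friendly assistant inside a social app used by teenagers. "
    "Refuse requests that could facilitate harm to minors, and suggest "
    "talking to a trusted adult when a user describes abuse."
)

def my_ai_reply(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption; the article doesn't name the model
        messages=[
            {"role": "system", "content": SAFETY_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```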