Submitted by demauroy t3_11pimea in Futurology
JoshuaACNewman t1_jby0lu4 wrote
Yes and no. ELIZA did a great job, too, just by repeating things back.
The problem with ChatGPT is that it knows a lot but doesn’t understand things. It’s effectively a very confident mansplainer. It doesn’t know what advice is good or bad; it just knows what people say is good or bad. It hasn’t studied anything in depth, or, more accurately, it doesn’t have the judgment to know what to regard with some remove and what to believe, because it only knows what people say.
I say this because of how autocomplete suggested to Cory Doctorow the other day that he ask his babysitter, “Do you have time to come sit [on my face]?” It doesn’t know what’s appropriate for a situation. It only knows what people think is appropriate for a situation. It’s appropriate to ask someone to sit on your face when that’s your relationship. It’s not appropriate to ask the babysitter. “Sit” means practically opposite things in two requests that are similar in almost every way except a couple of critical ones.
[deleted] t1_jbyau48 wrote
[deleted]
demauroy OP t1_jbybmk7 wrote
I meant that real people hold a lot of opinions that are not backed by proper knowledge, formed just by applying a general principle that is not relevant to the conversation. Something like people mixing up radio emissions and radioactive emissions and being afraid of 5G waves (or WiFi, for that matter).
JoshuaACNewman t1_jbyeso0 wrote
I don’t understand your comment.
I’m not autistic. Are you saying that therapists should not have some remove from their patients?
demauroy OP t1_jbyb2rq wrote
Do people actually understand things more than a good AI model? I think we very often create inference patterns that have no link to reality and are later refuted as absurd.
JoshuaACNewman t1_jbye0t1 wrote
If most of the time it gives advice that’s at least as good as a therapist’s, and then it recommends self-harm because that’s what people do, it’s obviously not good for therapeutic purposes.
It’s not that therapists never fuck up, either. It’s that AI doesn’t have any of the social structure we have. It can’t empathize because it has neither feelings nor experience, which means any personality you perceive is one you’re constructing in your own mind.
We have ways and reasons to trust each other that AI can activate, but the signals are false.
ninjadude93 t1_jc02r6o wrote
People have the capacity to understand the meaning behind the words they say. ChatGPT does not, and no AGI exists today that can.
I'd be incredibly wary of letting your teenager treat ChatGPT as a real confidant without explaining its critical limitations.