Captain_Clark
Captain_Clark t1_j99bx6b wrote
This is nothing new. ELIZA had a similar effect on users decades ago, despite its far cruder capabilities at language construction.
>>Shortly after Joseph Weizenbaum arrived at MIT in the 1960s, he started to pursue a workaround to this natural language problem. He realized he could create a chatbot that didn’t really need to know anything about the world. It wouldn’t spit out facts. It would reflect back at the user, like a mirror.
>> Weizenbaum had long been interested in psychology and recognized that the speech patterns of a therapist might be easy to automate. The results, however, unsettled him. People seemed to have meaningful conversations with something he had never intended to be an actual therapeutic tool. To others, though, this seemed to open a whole world of possibilities.
>> Weizenbaum would eventually write of ELIZA, “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
ChatGPT is lightyears beyond ELIZA’s capabilities. But Weizenbaum’s concerns remain, and that same dynamic is how we got here: to a point where you are entranced in exactly the same way ELIZA’s users were.
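It’s striking how little machinery that “mirror” actually needs. Below is a rough, hypothetical sketch in the spirit of ELIZA’s approach (not Weizenbaum’s original DOCTOR script): match a pattern, swap the pronouns, and hand the user’s own words back as a question.

```python
import re

# Illustrative ELIZA-style mirroring (hypothetical sketch, not the original script).
# It knows nothing about the world: it matches a pattern, swaps pronouns,
# and reflects the user's own words back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(text):
    # Swap first- and second-person words so the statement points back at the user.
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input):
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"You say that {reflect(user_input)}. Tell me more."

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

Even something this crude can feel like it’s listening, which was exactly Weizenbaum’s worry.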
Captain_Clark t1_j8x3v5r wrote
Reply to comment by donniedenier in ChatGPT is a robot con artist, and we’re suckers for trusting it by altmorty
Which is fine. I merely wish to suggest to you that if you consider ChatGPT to be intelligent, you devalue your own intelligence and your reason for having it.
Because by your own description, you’ve already implied that ChatGPT is more intelligent than you.
So I’d ask: Do you really want to believe that a stack of code is more intelligent than you are? It’s just a tool, friend. It only exists as human-created code, and it only does one thing: Analyze and construct human language.
Whereas, you can be intelligent without using language at all. You can be intelligent by simply and silently looking at another person’s face.
And the reason I’m telling you this is because I consider it dangerous to mistake ChatGPT for intelligence. That’s the same fear you describe: The devaluing of humanity, via the devaluing of human labor. But human labor is not humanity. If it were so, we could say that humans who do not work are not intelligent - even though most of us would be perfectly happy if we didn’t have to work. Which is why we created ChatGPT in the first place.
It once required a great deal of intelligence to start a fire. Now you can start a fire simply by flicking a lighter. That didn’t make you less intelligent than a lighter.
Captain_Clark t1_j8w8dcq wrote
Reply to comment by donniedenier in ChatGPT is a robot con artist, and we’re suckers for trusting it by altmorty
Correct, it is not sentient.
Now consider: every organic intelligence is sentient. That’s because intelligence evolved for a reason: to serve the sentience that enables the organism to survive.
Sentience is the foundation upon which intelligence has evolved. It is the necessary prerequisite for intelligence to exist. That holds true for every level of intelligence in every living creature. Without sentience, there’s no reason for intelligence.
So it’s quite a stretch to consider an intelligence with no foundation and no reason of its own to be intelligence at all. It’s something. But it’s not intelligence. And ChatGPT has no reason, other than our own.
You can create a website for me. But unless you have a reason to, you won’t. That is intelligence.
Captain_Clark t1_j8w5rrr wrote
Reply to comment by donniedenier in ChatGPT is a robot con artist, and we’re suckers for trusting it by altmorty
ChatGPT is so intelligent it could be running on a server that’s literally on fire and ChatGPT wouldn’t know it.
It’s a pretty narrow definition of “intelligence” to suggest it includes no awareness of oneself or the world at all.
If I were on fire and about to be run over by a train while I strung together text I’d found on the internet and babbled it at you, you’d likely not think: “Wow, that guy sure is intelligent.”
Captain_Clark t1_j8dv49m wrote
Reply to comment by xdetar in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
It’s truthful though. Why would one forgive a machine, which lacks the ability to suffer?
There’s been a lot of speculation, misunderstanding and misrepresentation on this matter of “Artificial Intelligence” in light of recent GPT developments. What there hasn’t been is any discussion of Artificial Sentience; sentience is a profoundly organic phenomenon, and it is what guides organic intelligence.
I guess I’m saying that Intelligence without Sentience isn’t really Intelligence at all. No non-sentient thing has intelligence.
GPT’s shortcomings are extremely evident in OP’s conversation, because emotional intelligence is necessary in order to have anything we may call “intelligence” at all, and I blame the marketers of such tech for promoting such a shallow, loaded, sci-fi term.
As for “chatting” with a GPT, that’s like talking to a toy.
“Woody, I hate the kid next door.”
>>”There’s a snake in my boot, partner!”
Captain_Clark t1_j3m7kh6 wrote
Reply to comment by Excludos in Drones Are Already Delivering Pizza, If You Haven't Noticed by the_remainder_17
Yeah, I’m wondering about people who live in apartment buildings. A drone can’t simply drop a pizza into their parking lot.
Captain_Clark t1_j2v70uk wrote
Reply to comment by carbonxe in When faced with a choice conflict, individuals who consume alcohol may be nudged into selecting more expensive branded alcoholic beverages. by [deleted]
I’m not sure what a “choice conflict” is. There are about 15 brands of beer at my local convenience store. I buy a fairly cheap brand which I’m accustomed to. If they lack that, I step up in quality instead of down because I’d already eliminated the cheaper brands. So is that the “choice conflict”?
Captain_Clark t1_iuirggn wrote
Reply to comment by alexalexalex09 in Tipping is how companies trick us into paying for their goods and their payroll by yodal-io
Yes, it’s a strange arrangement. One may certainly earn more in tips than minimum wage would provide. But the determining factor in a tip’s amount is the price of the meal, which has no direct correlation to the degree of service provided by the wait staff.
A server may devote the same amount of effort to waiting tables at a pricey restaurant or an inexpensive diner, but the tips at the diner will pay less.
Obviously, the arrangement is intended to allow higher-priced restaurants to hire the best, most attentive wait staff. But there’s a big presumption here: that a good waiter at Denny’s will naturally be waiting tables at a five-star steakhouse some day.
The server at Denny’s may work three times harder than the server at Ruth’s Chris, but earn a fraction of the pay in tips. The server at Denny’s has no control over this, despite how good a server they are - and a tip is supposed to reflect the quality of their labor.
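To put rough, hypothetical numbers on it: a standard 20% tip on a $150 steakhouse check comes to $30, while 20% of a $25 diner check is $5 - for what may well be more work at the diner table.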
A bad chef can destroy a server’s tips. Unclean bathrooms may destroy a server’s tips. A lengthy wait for seating may do so. A customer may simply be a cheapskate or even a racist creep who doesn’t like to tip certain people. These are circumstances which have nothing to do with how well the server does their job.
Captain_Clark t1_iuh7ol9 wrote
Reply to comment by Turbot_charged in Tipping is how companies trick us into paying for their goods and their payroll by yodal-io
It only applies to wait staff. It’s a loophole of some sort in the US. I don’t know why it exists.
Captain_Clark t1_iugmq4l wrote
Reply to comment by RosebudDelicious in Tipping is how companies trick us into paying for their goods and their payroll by yodal-io
Problem is, tipped servers can legally be paid a base wage below the standard minimum. So a tip is compensation for that.
Captain_Clark t1_iu4r8mo wrote
Reply to comment by Lithuim in ELI5:A child causes a wagon to accelerate by pulling it with a horizontal force. newton's third law says that the wagon exerts an equal and opposite force on the child. how can the wagon accelerate? by Gbo_the_beast
With enough children and wagons, we can reverse the earth’s rotation.
Captain_Clark t1_j9ay4xy wrote
Reply to comment by Master00J in Guys am I weird for being addicted to chatgpt ? by Transhumanist01
What you’re describing is also the argument made by those who’ve supported the idea that an “electronic therapist” could provide benefits to a suffering person.
There are indeed possibilities here, though I’d say there seem to be just as many pitfalls.
You are correct in saying that a cognitive therapist is a listener. But they’re a trained, professional listener, attuned to the nuances of sentience. A cognitive therapist will listen so well that they can point out things you’ve repeated and associations you’ve made, and bring these to your attention.
e.g.: “You’ve mentioned your mother every time you’ve described the difficulties in your relationships,” or “You’ve mentioned your uncle three times, and you started fidgeting with your clothing. What can you tell me about him?”
So yes, it’s a job of listening. But it’s listening very attentively, and also watching a patient as they become tense or struggle for words. It’s observing. The reason the therapist is a highly trained observer is that we don’t observe ourselves and don’t recognize our own problematic patterns. Maybe that uncle molested the patient, and the patient is repressing the memories while still suffering from them.
A chatbot may be a good venue for venting our feelings, and maybe for recognizing some of our patterns, though I suspect we wouldn’t do that very well, because we’re basically talking to ourselves while a bot that can’t see us and has no sentience responds to our prompts. We already can’t see our own patterns. Nor will ChatGPT, which does not retain previous chats. One could write the same irrational obsession to ChatGPT every day, and ChatGPT would never recognize that an obsession exists.
It’s writing therapy, I suppose. But does it provide guidance? And can it separate our good ideas from our harmful ones? I’m doubtful about that, and if it could be trained to, such a tool could actually be employed as a brainwashing machine. I don’t consider that hyperbole: imagine the Chinese government mandating that its citizens speak with a government chatbot. They already have “re-education” camps and “behavioral ranking” systems.
I’m reminded of this scene.