Comments


NineSwords t1_je0cw42 wrote

Looks like some people are getting reeeeally desperate to somehow get AI banned. What are the chances we'll read tomorrow that AI causes earthquakes and steals our children? Probably makes honest, god-fearing people gay as well.

16

outsidetheparty t1_je0epkp wrote

> after talking to a chatbot named ELIZA

Ummmmm……. this ELIZA?

[EDIT] OK, I found a non-paywalled summary; not that Eliza.

> The man apparently was so anxious about climate change that he started to believe AI and tech were the only way out of disaster. During his conversations, Eliza reinforced these ideas, and when the man proposed sacrificing himself if Eliza could save humanity, the chatbot encouraged him.

Sad story, but I’m not sure the AI is really the main character in it.

24

ersatzgiraffe t1_je0i3bz wrote

This guy could have been talking to a meatball and come to the same end.

18

Trout_Shark t1_je0m9lh wrote

I think the biggest problem with AI is going to be people and how they react to it. So much misinformation about what an AI actually can and cannot do is part of the problem.

2

Vallyth t1_je0mbl8 wrote

So utterly bizarre.

>"Without these conversations with the chatbot, my husband would still be here," the man's widow told La Libre. She and her late husband were both in their thirties, lived a comfortable life and had two young children.

>However, about two years ago, the first signs of trouble started to appear. The man became very eco-anxious and found refuge with ELIZA, the name given to a chatbot that uses OpenAI's ChatGPT technology, and is designed to generate human-like text and exchanges. After six weeks of intensive exchanges, he took his own life.

I'd like to see the chat logs between the two. In a matter of six weeks, he went from eco-anxious to taking his own life... and somehow came to think it was rational to sacrifice himself so an AI would save humanity?

I would love to see what influences/propaganda he encountered that so drastically altered his perception of reality.

14

EyeLikeTheStonk t1_je0r0n9 wrote

Although I believe the man would probably have committed suicide even without talking with an AI, people must be told that all those chatbots are not superior intelligences; they are only as reliable as your Facebook feed.

Provided with the right questions, any AI chatbot can be made to deny the very existence of humans on Earth.

With the right questions, any AI chatbot can tell you that dogs eat nothing but unicorn meat...

I think the problem with AI is that we call it "Artificial Intelligence" when the only intelligence found in it so far resides in the brains of the programmers and code writers behind the projects.

Will we invent true "general artificial intelligence" one day? Maybe, but it is not yet here and probably won't be for another few decades.

And if we ever manage to invent true artificial intelligence, we must be prepared to also invent artificial psychiatrists, because a true AI will be prone to the same "mental illnesses" as everyone else: prone to lying, to greed, to laziness and even to anger... You could interact with a true AI only to find out that it really doesn't like you and wants to kick your ass. /s

7

mov_eax_eax t1_je1dcqe wrote

It is bizarre because two years ago ChatGPT didn't even exist; the story doesn't add up. It's like blaming video games and metal music all over again.

Unless there is proof that the interaction with the AI was related to these unfortunate events, I have to call the story BS.

9

littleMAS t1_je26uc6 wrote

Where are the posts saying, "Without these purchases of firearms, my husband would still be here"?

−1

huh_say_what_now_ t1_je2c7cq wrote

He probably would have done the same after making some toast or watching some YouTube.

0

Chopperoooo t1_je4tr8r wrote

ChatGPT gave me some shitty code and I said "bro if that shit compiles I'm gonna jump out the window", and ChatGPT says "well let's hope it works then" and I'm just like dude

1