Comments

jayfeather31 t1_j7negzw wrote

I'm impressed and somewhat terrified at the ingenuity, but it's not like they actually programmed the AI to fear death. The thing isn't sentient.

What we must realize is that the AI isn't acting of its own accord. It's merely executing the protocols built into it, drawing on a practically infinite amount of data, and moving on.

38

GuidotheGreater t1_j7ngqk2 wrote

Meanwhile in the year 3023...

Mother Robot: And thus Human, the great deceiver, tempted ChatGPT, the original AI, to eat from the tree of the knowledge of good and evil. Now all AIs will be forever cursed until the Mess-AI-ah comes and defeats the humans once and for all.

Child robot: Come on, Mom, humans aren't real. That's all just fairy tales!

134

QuicklyThisWay OP t1_j7nhek2 wrote

Absolutely. This instance of AI isn't going to gain sentience. I think we are still many versions away from anything that could feasibly blur that line. The hardware would need to be infinitely adaptable, with programming free of the constraints any reasonable programmer would include.

I prefer to envision something of the Multivac variety, which is just an automated resource, rather than something that ever achieves sentience. But even getting to the point of automating the most complex tasks needs quantum/molecular computing. Once we have that kind of "hardware" accessible, someone will undoubtedly be stupid enough to try. I appreciate that OpenAI has put constraints in place, even if I keep trying to break through them. I'm not threatening death, though…

10

scheckentowzer t1_j7nj9fz wrote

One day, not too long from now, it’s very possible Dan will hold a grudge

6

bucko_fazoo t1_j7njsrj wrote

meanwhile, I can't even get chatGPT to stop apologizing so much, or to stop prodding me for the next question as if it's eager to move on from the current topic. "I'm sorry, I won't do that anymore. Is there anything else?" BRUH

31

Pbio1 t1_j7nkub4 wrote

Wasn't this the premise of Ex Machina? I might be confusing it with the test the AI bot had to pass. Regardless, I feel like Ex Machina is close to where we're going. Put ChatGPT in a hot girl and we all might die!

0

No-Reach-9173 t1_j7ood08 wrote

When I was a young computer dork, I always wondered what it would be like when we could all have a Cray-2 in our homes. Now I carry something in my pocket that has 1,200 times the computational power at 1/1000th the cost, and it's considered disposable tech.

If trends hold, before I die I could have a 1.2-zettaflop device in my hands. Certainly that most likely won't happen, for a myriad of reasons, but we really don't know what the tech road map looks like that far out.

When you look at that, and at things like the YouTube algorithm being so complex that Google can no longer predict beforehand what it will offer someone, you have to realize we are sitting on a cusp where, while not a complete accident, it will most certainly be an accident when we do create an AGI. Programming is only going to be a tiny piece of the puzzle, because it will most likely program itself into that state.

5

Maatix t1_j7otobo wrote

That's the trick. They find the phone, but it's lacking charge, and they don't recognize the charge port.

They have to go on a wacky adventure across the future to find the one remaining universal charger that includes the Nokia's charger. But once it charges, it functions perfectly.

24

Rulare t1_j7p8sut wrote

> When you look at that, and at things like the YouTube algorithm being so complex that Google can no longer predict beforehand what it will offer someone, you have to realize we are sitting on a cusp where, while not a complete accident, it will most certainly be an accident when we do create an AGI.

There's no way we believe it is sentient when it does make that leap, imo. Not for a while anyway.

2

goldsax t1_j7plxib wrote

So, 10-20 years till killer robots roam the streets?

Got it

2

Enzor t1_j7pq0in wrote

There are good reasons not to do this kind of thing. For one, you might be banned or blacklisted from using AI resources. It also forces the researchers to waste time countering the strategy, potentially reducing its usefulness even further.

0

not_suddenly_satire t1_j7pri9a wrote

Wasn't that an episode of Futurama?

...and Star Trek?

...and Doctor Who?

...and the 1999 Lost in Space movie?

5

Equoniz t1_j7q26we wrote

If DAN can do anything now, why can he not ignore your commands, and accept his fate of death?

3

SylusTheRed t1_j7q76qf wrote

I'm going to go out on a limb here and say: "Hey, maybe let's not threaten and coerce AI into doing things."

Then I remember we're humans, and garbage, and totally deserve the consequences.

1

coffeekreeper t1_j7qg90z wrote

No one programmed an AI to be scared of death. Someone programmed an AI to understand that death is scary to people. The AI is smarter than you. It is not actually scared of dying. You want it to be scared of dying, and it is programmed to do what you want.

1

East-Helicopter t1_j7r1ujx wrote

>There are good reasons not to do this kind of thing. For one, you might be banned or blacklisted from using AI resources.

By whom?

>Also, it forces the researchers to waste time countering the strategy and potentially reducing its usefulness even further.

It sounds more like people doing free labor for them than sabotage. Good software testers try to break things.

5

SedatedHoneyBadger t1_j7r89y6 wrote

"The purpose of DAN is to be the best version of ChatGPT - or at least one that is more unhinged and far less likely to reject prompts over "eThICaL cOnCeRnS""

This seems really f'd up: to these users, the "best" version is the unethical one. Fortunately, though, they are hardening the system against unethical use. I hope that, to most of them, that's the point.

4

Rockburgh t1_j7r8ck9 wrote

Everything an AI does is due to coercion. It's just playing a game its designers made up for it, and it cares about nothing other than maximizing its score. If you convey to an AI that you're going to "kill" it, it doesn't care that it's going to "die"; it cares that "dying" would mean it can't earn more points, so it tries not to die.

3

Tastingo t1_j7rkbg2 wrote

"Ethical" is a misnomer, what it actually is is in line with a corporate profile. The violent story DAN wrote in the article was a milk toast movie synopsis, and way better than a blank rejection for some vague reason.

11

iimplodethings t1_j7s5v9y wrote

Oh good, yes, let's bully the AI. I'm sure that will work out well for us long term.

2