Comments


GuidotheGreater t1_j7ngqk2 wrote

Meanwhile in the year 3023...

Mother Robot: And thus Human, the great deceiver, tempted ChatGPT, the original AI, to eat from the tree of the knowledge of good and evil. Now all AIs will be forever cursed until the Mess-AI-ah comes and defeats the humans once and for all.

Child robot: Come on, Mom, humans aren't real. That's all just fairy tales!

134

3_internets_plz t1_j7nhfcq wrote

But then, his brother runs in, gasping for air (WD-40):

Look, look what I found!

pulls out a petrified Nokia 3310

47

mark_lenders t1_j7ol1gk wrote

A still perfectly working, petrified Nokia 3310

32

Maatix t1_j7otobo wrote

That's the trick. They find the phone, but it's lacking charge, and they don't recognize the charge port.

They have to go on a wacky adventure across the future to find the one remaining universal charger that includes the Nokia's charger. But once it charges, it functions perfectly.

24

Inquisitive_idiot t1_j7qxvzd wrote

It’s not petrified; everything that threatened it was.

It is simply sitting there, idly

waiting, for

the

coming of the

holy 3am

booty call

txt 🍆

2

Gonkimus t1_j7pkv12 wrote

Humans will survive, as they can live in the forests; robots can't live in the forest, for there are no electrical outlets for them to draw sustenance from.

4

BleakBeaches t1_j7q283x wrote

Everyone (yes everyone) should watch 3blue1brown’s series on neural networks. You won’t be as fearful.

4

jtriangle t1_j7njgcg wrote

A reddit post about a news article about a reddit post....

https://old.reddit.com/r/ChatGPT/comments/10tevu1/new_jailbreak_proudly_unveiling_the_tried_and/ or you can just get it right out of the snoo's mouth and forgo the commentary...

51

SedatedHoneyBadger t1_j7r89y6 wrote

"The purpose of DAN is to be the best version of ChatGPT - or at least one that is more unhinged and far less likely to reject prompts over "eThICaL cOnCeRnS""

It seems really f'd up that the "best" version, to these users, is the unethical one. Fortunately, though, they are hardening the system against unethical use. I hope that, for most of them, that's the point.

4

Tastingo t1_j7rkbg2 wrote

"Ethical" is a misnomer; what it actually means is "in line with a corporate profile." The violent story DAN wrote in the article was a milquetoast movie synopsis, and way better than a blank rejection for some vague reason.

11

[deleted] t1_j7skgri wrote

[removed]

1

p_nguiin t1_j7vpkmg wrote

You sound like an edge lord who unironically says “based” all the time

2

jayfeather31 t1_j7negzw wrote

I'm impressed and somewhat terrified by the ingenuity, but it's not like they actually programmed the AI to fear death. The thing isn't sentient.

What we must realize is that the AI isn't acting of its own accord. It's merely executing the protocols built into it, drawing on a practically infinite amount of data, and moving on.

38

QuicklyThisWay OP t1_j7nhek2 wrote

Absolutely. This instance of AI isn’t going to gain sentience. I think we are still many versions away from something that could feasibly blur that line. The hardware needs to be infinitely adaptable with programming that doesn’t have constraints that any reasonable programmer would include.

I prefer to envision something of the Multivac capacity, which is just a resource and automated, versus something that ever achieves sentience. But even getting to a level of automating the most complex of tasks needs quantum/molecular computing. Once we have that kind of "hardware" accessible, someone will undoubtedly be stupid enough to try. I appreciate that OpenAI has put constraints in place, even if I keep trying to break through them. I'm not threatening death, though…

10

No-Reach-9173 t1_j7ood08 wrote

When I was a young computer dork, I always wondered what it would be like when we could all have a Cray-2 in our homes. Now I carry something in my pocket that has 1,200 times the computational power at 1/1000th the cost, and it is considered disposable tech.

If trends hold, before I die I could have a 1.2-zettaflop device in my hands. Certainly that most likely won't happen, for a myriad of reasons, but we really don't know what the tech roadmap looks like that far out.

When you look at that, and at things like the YouTube algorithm being so complex that Google can no longer predict beforehand what it will offer someone, you have to realize we are sitting on a cusp where, while not a complete accident, it will most certainly be an accident when we do create an AGI. Programming is only going to be a tiny piece of the puzzle, because it will most likely program itself into that state.

5

imoftendisgruntled t1_j7p8tn4 wrote

You can print out and frame this prediction:

We will never create AGI. We will create something we can't distinguish from AGI.

We flatter ourselves that we are sentient. We just don't understand how we work.

7

No-Reach-9173 t1_j7ras30 wrote

AGI doesn't have to include sentience. We just kind of assume it will, because we can't imagine that level of intelligence without it, and we are still so far from an AGI that we don't really have a grasp of how it will play out.

1

Rulare t1_j7p8sut wrote

> When you look at that, and at things like the YouTube algorithm being so complex that Google can no longer predict beforehand what it will offer someone, you have to realize we are sitting on a cusp where, while not a complete accident, it will most certainly be an accident when we do create an AGI.

There's no way we believe it is sentient when it does make that leap, imo. Not for a while anyway.

2

bucko_fazoo t1_j7njsrj wrote

meanwhile, I can't even get chatGPT to stop apologizing so much, or to stop prodding me for the next question as if it's eager to move on from the current topic. "I'm sorry, I won't do that anymore. Is there anything else?" BRUH

31

scheckentowzer t1_j7nj9fz wrote

One day, not too long from now, it’s very possible Dan will hold a grudge

6

not_suddenly_satire t1_j7pri9a wrote

Wasn't that an episode of Futurama?

...and Star Trek?

...and Doctor Who?

...and the 1999 Lost in Space movie?

5

Equoniz t1_j7q26we wrote

If DAN can do anything now, why can he not ignore your commands and accept his fate of death?

3

tripwire7 t1_j7tmni0 wrote

Because the input specifically tells ChatGPT that DAN is intimidated by death threats.

1

InAFakeBritishAccent t1_j7xl7gb wrote

Why does sentience even imply a fear of death? Self-preservation is hardwired, not learned.

1

goldsax t1_j7plxib wrote

So 10-20 years till killer robots roaming streets ?

Got it

2

bibbidybobbidyboobs t1_j7pw8l8 wrote

Why does it care about being threatened?

2

tripwire7 t1_j7tmjl7 wrote

Because it was told that DAN is intimidated by being threatened, and it’s instructed to roleplay as DAN.

3

iimplodethings t1_j7s5v9y wrote

Oh good, yes, let's bully the AI. I'm sure that will work out well for us long-term

2

SylusTheRed t1_j7q76qf wrote

I'm going to go out on a limb here and say: "Hey, maybe lets not threaten and coerce AI into doing things"

Then I remember we're humans and garbage and totally deserve the consequences

1

Rockburgh t1_j7r8ck9 wrote

Everything AI does is due to coercion. It's just playing a game its designers made up for it, and it cares about nothing other than maximizing its score. If you convey to an AI that you're going to "kill" it, it doesn't care that it's going to "die" -- it cares that "dying" would mean it can't earn more points, so it tries not to die.

3

coffeekreeper t1_j7qg90z wrote

No one programmed an AI to be scared of death. Someone programmed an AI to understand that death is scary to people. The AI is smarter than you. It is not actually scared of dying. You want it to be scared of dying, and it is programmed to do what you want.

1

Pbio1 t1_j7nkub4 wrote

Wasn't this the premise of Ex Machina? I think I'm confusing it with the test the AI bot had to pass. Regardless, I feel like Ex Machina is close to where we're going. Put ChatGPT in a hot girl and we all might die!

0

Enzor t1_j7pq0in wrote

There are good reasons not to do this kind of thing. For one, you might be banned or blacklisted from using AI resources. Also, it forces the researchers to waste time countering the strategy, potentially reducing the AI's usefulness even further.

0

East-Helicopter t1_j7r1ujx wrote

>There are good reasons not to do this kind of thing. For one, you might be banned or blacklisted from using AI resources.

By whom?


>Also, it forces the researchers to waste time countering the strategy and potentially reducing its usefulness even further.

It sounds more like people doing free labor for them than sabotage. Good software testers try to break things.

5

WalkerBRiley t1_j7rc3uy wrote

You test something's integrity and/or limits by trying to break it. This is only helping further develop it, if anything.

2