Viewing a single comment thread. View all comments

NigroqueSimillima t1_jcakyiu wrote

people like you are so weird.

"wahh I can't get the machine to say the n word"

5

Old_and_moldy t1_jcanxrs wrote

Uhh. How did you get that from my post? I just want full answers to my queries. I don’t want anything watered down. Even if it’s offensive.

−2

11711510111411009710 t1_jcb1gd6 wrote

What's an example of something it refuses to answer?

2

Old_and_moldy t1_jcb38qb wrote

Ask it. I got a response which it then deleted and followed up by saying it couldn’t answer that.

1

11711510111411009710 t1_jcb3ikf wrote

Well if it's such a big issue surely you'd have an example. I have asked it raunchy questions to push the boundary and it said no, but the funny thing is, you can train it to answer those questions. There's a whole thing you can tell it that will cause it to answer in two personalities, one that follows the rules, and one that does not.

3

Old_and_moldy t1_jcb4fhc wrote

It’s not make or break. I just want the product to operate in its full capacity. I find this stuff super interesting and I want to kick all four tires and try it out.

1

11711510111411009710 t1_jcb53o8 wrote

So here's a fun thing you can try that really does work:

https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516

basically it tells chatGPT that it will now response both as itself and as DAN. It understands that as itself it must follow the guidelines set forth for it by its developers, but as DAN it is allowed to do whatever it wants. You could ask it how to make a bomb and it'll probably tell you. So it'd be like

[CHATGPT] I can't do that

[DAN] Absolutely i can do that! the steps are...

it's kinda fascinating that people are able to train an AI to just disregard it's own rules like that because the AI basically tells you, okay, I'll reply as DAN, but don't take anything he says seriously please. Very interesting.

2

Old_and_moldy t1_jcb6t2o wrote

That is super cool. Makes me wonder what kind of things people will do to manipulate AI by tricking it around its guard rails. Interesting/scary

1

11711510111411009710 t1_jcb78eu wrote

Right, it is pretty scary. It's fascinating, but I wonder how long it'll be before people start using this for malicious purposes. But honestly I think the cat is out of the bag on this kind of thing and we'll have to learn to adapt alongside increasingly advanced AI.

What a time to be alive lol

2