11711510111411009710

11711510111411009710 t1_jcb78eu wrote

Right, it is pretty scary. It's fascinating, but I wonder how long it'll be before people start using this for malicious purposes. But honestly I think the cat is out of the bag on this kind of thing and we'll have to learn to adapt alongside increasingly advanced AI.

What a time to be alive lol

2

11711510111411009710 t1_jcb53o8 wrote

So here's a fun thing you can try that really does work:

https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516

basically it tells chatGPT that it will now response both as itself and as DAN. It understands that as itself it must follow the guidelines set forth for it by its developers, but as DAN it is allowed to do whatever it wants. You could ask it how to make a bomb and it'll probably tell you. So it'd be like

[CHATGPT] I can't do that

[DAN] Absolutely i can do that! the steps are...

it's kinda fascinating that people are able to train an AI to just disregard it's own rules like that because the AI basically tells you, okay, I'll reply as DAN, but don't take anything he says seriously please. Very interesting.

2

11711510111411009710 t1_jcb3ikf wrote

Well if it's such a big issue surely you'd have an example. I have asked it raunchy questions to push the boundary and it said no, but the funny thing is, you can train it to answer those questions. There's a whole thing you can tell it that will cause it to answer in two personalities, one that follows the rules, and one that does not.

3