SylusTheRed t1_j7q76qf wrote on February 8, 2023 at 5:08 PM
Reply to ChatGPT's 'jailbreak' tries to make the A.I. break its own rules, or die by QuicklyThisWay
I'm going to go out on a limb here and say: "Hey, maybe let's not threaten and coerce AI into doing things."
Then I remember we're humans, and garbage, and totally deserve the consequences.