[R] [P] New ways of breaking app-integrated LLMs with prompt injection github.com Submitted by taken_every_username t3_11bkpu3 on February 25, 2023 at 1:13 PM in MachineLearning 9 comments 52
KakaTraining t1_ja5u446 wrote on February 27, 2023 at 1:38 AM An attack case: I changed NewBing's name to KaKa instead of Sydney, which means that it is possible to break through Microsoft's more restrictions on new Bing. https://twitter.com/DLUTkaka/status/1629745736983408640 Permalink 2
Viewing a single comment thread. View all comments