Ok-Assignment7469 t1_j9g51o4 wrote
Reply to comment by cat_91 in [D] Maybe a new prompt injection method against newBing or ChatGPT? Is this kind of research worth writing a paper? by KakaTraining
These models are largely trained with reinforcement learning, and the objective is to give you the answer that satisfies you most. If you keep bugging it, eventually it will tell you the password, because you keep asking for it, and the bot's main goal is to satisfy your request. It works on probability, not reasoning, because it was not designed to behave reasonably.