Submitted by Surur t3_10h4h7s in Futurology
adfjsdfjsdklfsd t1_j5a1x8y wrote
Many AI safety experts predicted this: "An unsafe AI is easier to develop than a safe AI".
Very troubling.
Introsium t1_j5ax6rh wrote
Watching OpenAI’s best and brightest optimistically release ChatGPT only to have it “jailbroken” within hours by [checks notes] people asking it to do bad things “just as a joke bro” should be an open-and-shut case: we are infants in AI safety, and we need to slow the fuck down, because there are some mistakes where once is enough.
“Do not conjure up that which you cannot put back down” is basic wizard shit.
GreatStateOfSadness t1_j5awiuy wrote
Is that surprising, though? I can't think of many complex systems where being unsafe takes extra work; unsafe is the default state, and safety gets engineered in afterward. The first cars didn't have seatbelts, and the first buildings didn't have emergency exits.
I'd be more shocked if the researchers came out and said "yeah, we trained an AI on an unfiltered dataset containing the totality of internet discourse, and all it does is post helpful product reviews and gush about its cats."
adfjsdfjsdklfsd t1_j5bi3wo wrote
It's not surprising at all, but that doesn't make it any less dangerous. If you're interested in the topic, search for "Robert Miles" on YouTube. He's an AI safety researcher with an incredible knack for explaining complex topics in layman's terms.
n_thomas74 t1_j5auhf0 wrote
David Bowie predicted this with his song "Saviour Machine"