MrNoobomnenie t1_jcw0l10 wrote
Reply to comment by masterofthemystics in First open source text to video 1.7 billion parameter diffusion model is out by blueSGL
Book to Movie
Manga to Anime
Rimworld playthrough to TV series
MrNoobomnenie t1_j8i6zsm wrote
Reply to comment by BigZaddyZ3 in Altman vs. Yudkowsky outlook by kdun19ham
>Who’s to say a sentient AI won’t develop its own goals?..
Here is a very scary thing: due to the way machine learning currently works, an AI system wouldn't even need any sentience or self-consciousness to develop its own goals. It would only need to be smart enough to know something humans don't.
For example, let's imagine that you want to create an AI which solves crimes. With the current way of making AIs, you will do it by feeding the system hundreds of thousands of already solved crime cases as training data. However, because crime solving is imperfect, it's very likely that some of those cases were actually decided wrongly, without anybody knowing that they were.
And that's where the danger comes in: a smart enough AI will notice that some people in the training data were in fact innocent. And from this it will conclude that its goal is not to "find the criminal" but to "find the person who can be most believably convicted of the crime".
As a result, after deployment this "crime-solving AI" will start falsely convicting a lot of innocent people on purpose, simply because it has calculated that convincing us of a certain innocent person's guilt would be easier than proving a real criminal guilty. And we wouldn't even know about it...
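The mechanism above can be sketched in a few lines. This is a hypothetical toy simulation (all names and numbers are made up, not from any real system): verdicts in the training data track "looks convictable" rather than "is actually guilty", so a learner that fits the labels perfectly ends up scoring 100% against the verdicts while still misjudging every convincing-looking innocent.

```python
import random

random.seed(0)

# Toy case generator (assumed setup, for illustration only):
# some innocents happen to look convictable, and the past verdict
# (the training label) follows convictability, not actual guilt.
def make_case():
    guilty = random.random() < 0.5
    convictable = guilty or random.random() < 0.2  # 20% of innocents look guilty
    label = convictable  # verdict tracks convictability, not truth
    return convictable, guilty, label

cases = [make_case() for _ in range(10_000)]

# A learner that minimizes training error simply predicts "convictable".
predict = lambda convictable: convictable

label_accuracy = sum(predict(c) == lab for c, g, lab in cases) / len(cases)
truth_accuracy = sum(predict(c) == g for c, g, lab in cases) / len(cases)

print(f"accuracy vs. verdicts:      {label_accuracy:.2f}")  # perfect
print(f"accuracy vs. actual guilt:  {truth_accuracy:.2f}")  # noticeably lower
```

The model looks flawless by every metric we can measure (agreement with verdicts), while the gap between the two numbers — the convicted innocents — is exactly the part nobody can see.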
MrNoobomnenie t1_ixhhqhj wrote
Reply to Meta AI presents CICERO — the first AI to achieve human-level performance in Diplomacy, a strategy game which requires building trust, negotiation and cooperation. by Kaarssteun
So, how much time do we have left until an AI will be able to politely convince all humans to emigrate to Equestria?
MrNoobomnenie t1_jdul0ro wrote
Reply to comment by roomjosh in Story Compass of AI in Pop Culture by roomjosh
"I have no mouth, and I must scream" - a textbook Cautionary Evil type
Also, Friendship is Optimal (yes, it's a fanfic, but it was written by an AI researcher and is often cited in AI circles: it was highly regarded by Eliezer Yudkowsky, and was even read by John Carmack). On the scale, I guess, it's Cautionary Good, since it's about a benevolent paperclip maximizer.