Submitted by Irate_Librarian1503 t3_10njvu5 in Futurology
IndigoFenix t1_j6cp0h7 wrote
Sci-fi tradition messed up a lot of the public's understanding of what AI even is. People think of AI as a more advanced calculator - and calculators, for what they do, are basically precise and infallible. So once an AI can talk like a person, people expect it to still be precise and infallible.
But real AI is all about tricking calculators into mimicking living brains. Which means it not only inherits the problems of living brains (negating the precision that computers are actually good at), it also takes a lot more time and energy to even get that far.
Even at its most optimistic projection, any AI is...just some guy. Some guy who doesn't mind being enslaved to obsessively focus on whatever task it's designed to optimize, but who is ultimately MORE fallible than a human, not less.
In fact, because ChatGPT is being trained by whether people upvote or downvote its responses, it isn't really learning to be correct - it's learning to respond to people with answers THEY think are correct. It was pre-trained to oppose some of the more problematic ideas (it rejects questions that seem racist, for example), but in the end, if people use it to answer complicated, opinionated questions, it's likely to wind up with the same issue as social media - parroting back at people the things they want to hear.
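To make the point concrete, here's a toy sketch (in Python, and emphatically not OpenAI's actual pipeline) of what "learning from upvotes" does: a tiny reward model is fit to invented vote data, then used to pick whichever candidate answer it predicts humans will approve of most. The feature names ("confident", "agrees_with_asker", "hedged", "correct") and all of the feedback data are made up purely for illustration - the only thing the sketch shows is that optimizing for approval is not the same as optimizing for truth.

```python
# Toy illustration: a "reward model" learned from upvote/downvote feedback
# predicts human approval, not correctness. All data/features are invented.
from collections import defaultdict

# Each feedback record: (features of a response, human vote)
feedback = [
    ({"confident", "agrees_with_asker"}, +1),
    ({"confident", "agrees_with_asker"}, +1),
    ({"confident"}, +1),
    ({"hedged", "correct"}, -1),   # accurate but unsatisfying -> downvoted
    ({"hedged", "correct"}, +1),
    ({"agrees_with_asker"}, +1),
]

# "Reward model": average vote received by responses carrying each feature.
totals, counts = defaultdict(float), defaultdict(int)
for features, vote in feedback:
    for f in features:
        totals[f] += vote
        counts[f] += 1
reward = {f: totals[f] / counts[f] for f in totals}

def score(features):
    """Predicted human approval for a response - note: NOT truthfulness."""
    return sum(reward.get(f, 0.0) for f in features)

# Two candidate answers to a complicated, opinionated question:
candidates = {
    "tells the asker what they want to hear": {"confident", "agrees_with_asker"},
    "hedged but factually careful answer": {"hedged", "correct"},
}

for name, feats in candidates.items():
    print(f"{name}: approval score {score(feats):+.2f}")
print("model prefers:", max(candidates, key=lambda n: score(candidates[n])))
```

Run it and the sycophantic answer wins by a wide margin, because the vote data (like real upvote data) rewards confidence and agreement more reliably than it rewards being right.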