Submitted by Irate_Librarian1503 in Futurology
PhilGibbs7777 wrote
ChatGPT is not yet smart enough to spot fake news. The people who trained it went to a lot of effort to stop it endorsing anything controversial, and the material used to train it must have been filtered to remove anything the people who control it would consider fake news. Other bots in the near future will be able to reason more independently, but we will have to see whether those will be allowed to be released to the public.
Irate_Librarian1503 OP wrote
Which is not to say that it can't spot clear lies. I asked it a lot of different questions about the supposedly stolen votes or stolen election with Trump. Every time, it told me that nothing was stolen.
Environmental-Buy591 wrote
ChatGPT itself has a very real problem with being confidently wrong. I don't know if the next version will be better, but it seems to be an ongoing problem. Fake news is convincing, and AI does not have the intuition to know when it is wrong. Look at Tay, the Twitter bot that turned racist and had to be taken down by Microsoft. I know that was a while ago, but you can still see a similar flaw in ChatGPT.
PhilGibbs7777 wrote
Yes, because that was the answer it got from its training material. If it had been fed misinformation saying that votes were stolen, it would tell you that instead. It does not have the ability to weigh up evidence and reach its own conclusion, though that will probably be possible soon. Of course, many people don't have a very strong ability to do that either.