Submitted by Irate_Librarian1503 t3_10njvu5 in Futurology
Irate_Librarian1503 OP t1_j69760p wrote
Reply to comment by PhilGibbs7777 in Why not use chat gpt to spot obvious fake news? by Irate_Librarian1503
Which is not to say that it could not spot clear lies. I asked it many different questions about stolen votes and the stolen election claims involving Trump. Every time, it told me that nothing was stolen.
Environmental-Buy591 t1_j69re3z wrote
ChatGPT itself has a very real problem with being confidently wrong. I don't know if the next version will be better, but it seems to be an ongoing problem. Fake news is convincing, and AI does not have the intuition to know when it is wrong. Look at the Twitter bot that turned racist and had to be taken down by Microsoft. I know that was a while ago, but you can still see a similar flaw in ChatGPT.
PhilGibbs7777 t1_j6a6khl wrote
Yes, because that was the answer it got from its training material. If it had been fed misinformation saying that votes were stolen, it would tell you that instead. It does not have the ability to weigh up evidence and reach its own conclusion, though that will probably be possible soon. Of course, many people don't have a very strong ability to do that either.