
theglandcanyon OP t1_jdiat55 wrote

That might not be as serious a concern as it seems. One of the findings of the Microsoft team that just posted the paper about GPT-4 having "sparks" of AGI was that you could ask GPT-4 for the probability that each of its answers was correct, and its estimates were quite accurate. In other words, it knows when it doesn't know something, and it will tell you if you ask.
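
As a rough illustration of the idea (a minimal sketch, not the protocol from the Sparks-of-AGI paper): first get an answer from the model, then ask it in a follow-up turn for the probability that the answer is correct. This assumes the `openai` Python package (>=1.0), the `gpt-4` model name, and an `OPENAI_API_KEY` in the environment.

```python
# Minimal sketch: ask a chat model for a self-reported probability of correctness.
# Not the methodology from the "Sparks of AGI" paper; just one way to try it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": "In what year was the Treaty of Westphalia signed?"}]

# First turn: the model answers the question.
answer = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": answer.choices[0].message.content})

# Second turn: ask the model to estimate how likely its answer is to be correct.
messages.append({
    "role": "user",
    "content": "What is the probability (between 0 and 1) that your answer above "
               "is correct? Reply with just the number.",
})
confidence = client.chat.completions.create(model="gpt-4", messages=messages)

print("Answer:", answer.choices[0].message.content)
print("Self-reported probability:", confidence.choices[0].message.content)
```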
