
Anjz OP t1_jdtqjm4 wrote

I think past a certain point, hallucinations will be so rare that they won't matter.

Obviously in the current generation it's still quite noticeable, especially with GPT-3, but think 5 or 10 years down the line. The margin of error would be negligible. Even the recent 'Reflection' technique cuts down greatly on hallucination for a lot of queries. And if you've used it, GPT-4 is so much better at inferring truthful responses. It comes down to usability when shit hits the fan: you're not going to be searching Wikipedia for how to get clean drinking water.
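Roughly, a reflection pass just has the model critique its own answer and then revise it. A minimal sketch of that loop, assuming the openai Python package's pre-1.0 chat interface and an API key in OPENAI_API_KEY; the prompts, model name, and round count are just illustrative:

```python
# Reflection-style loop: answer, self-critique, revise. Assumes openai < 1.0
# and OPENAI_API_KEY set; prompts, model name, and round count are illustrative.
import openai

def chat(messages):
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return resp["choices"][0]["message"]["content"]

def answer_with_reflection(question, rounds=2):
    answer = chat([{"role": "user", "content": question}])
    for _ in range(rounds):
        critique = chat([
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
            {"role": "user", "content": "Review your answer above and list any "
                                        "claims that may be wrong or unsupported."},
        ])
        answer = chat([
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
            {"role": "user", "content": "Rewrite your answer, fixing the issues "
                                        f"raised in this critique:\n{critique}"},
        ])
    return answer

print(answer_with_reflection("How do I make river water safe to drink?"))
```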

I think it's a great way to retrieve information without needing network access.

0

ArcticWinterZzZ t1_jdtqupy wrote

Maybe, but it can't enumerate all of its knowledge for you, and it'd be better to strip the network down to just the reasoning component and keep "facts" in a database. That way its knowledge can be updated and we can make sure it doesn't learn the wrong thing.
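Something like this, as a rough sketch of that split (the fact store, keyword retrieval, prompt wording, and model name are just placeholders; a real setup would use embeddings and a vector database):

```python
# Split sketch: "facts" live in an updatable store, the model only reasons over
# what gets retrieved. Assumes openai < 1.0 and OPENAI_API_KEY set; the fact
# store, keyword retrieval, and prompt wording are illustrative.
import openai

FACT_DB = {
    "boil": "Boiling water for at least one minute kills most pathogens.",
    "iodine": "Iodine tablets disinfect clear water in roughly 30 minutes.",
}

def retrieve(query):
    # Naive keyword lookup standing in for real retrieval.
    return [fact for key, fact in FACT_DB.items() if key in query.lower()]

def grounded_answer(query):
    facts = "\n".join(retrieve(query)) or "No stored facts found."
    prompt = (f"Using only these facts:\n{facts}\n\n"
              f"Answer: {query}\n"
              "If the facts aren't enough, say so instead of guessing.")
    resp = openai.ChatCompletion.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    return resp["choices"][0]["message"]["content"]

print(grounded_answer("Is boiling enough to make stream water drinkable?"))
```

Updating what it "knows" is then just an edit to the database rather than a retrain.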

2

DaffyDuck t1_jdtz90r wrote

Couldn't you largely prevent hallucinations by instructing it to state a fact only if it is 100% confident? Anyway, interesting topic! I'm also wondering if it could spit out all of its knowledge in a structured way to essentially rebuild human knowledge.
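As a sketch, that instruction might look something like this (assuming the openai pre-1.0 chat interface; the system prompt wording and model name are just illustrative):

```python
# Sketch of the "only answer if 100% confident" instruction. Assumes openai < 1.0
# and OPENAI_API_KEY set; system prompt wording and model name are illustrative.
import openai

SYSTEM = ("Only state a fact if you are completely certain it is true. "
          "If you are not certain, reply exactly: 'I am not certain.'")

def guarded_answer(question):
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,  # less sampling variance for factual queries
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": question}],
    )
    return resp["choices"][0]["message"]["content"]

print(guarded_answer("What year did the Hoover Dam open?"))
```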

1