
Anjz OP t1_jdtnx32 wrote

Wikipedia will tell you the history of fishing, but it won't tell you how to fish.

For example, GPT-4 was trained on openly available knowledge from the fishing subreddit, fishing forums, Stack Exchange and so on. Even Wikipedia. So it can infer answers from the knowledge and data on those sites. You can ask it for the best spots to fish, what lures to use, how to tell if a fish is edible, or how to cook a fish like a 5-star restaurant.

Imagine that localized. It's beyond a copy of Wikipedia. Collective intelligence.

Right now our ability to run AI locally is limited to something like Alpaca 7B/13B for the most coherent models, but that won't be the case for long. We might have something similar to GPT-4 running locally in the near future.
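
To give a rough idea of what "running locally" looks like today, here's a minimal sketch using the llama-cpp-python bindings with a quantized Alpaca-style checkpoint. The model path, prompt template and sampling settings are placeholders/assumptions, not specific recommendations:

```python
# Minimal sketch of offline local inference with a quantized Alpaca-style model.
# Assumes llama-cpp-python is installed and a quantized checkpoint has already
# been downloaded; the file path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/alpaca-13b-q4.gguf", n_ctx=2048)

prompt = (
    "Below is an instruction. Write a response that completes the request.\n\n"
    "### Instruction:\nHow can I make river water safe to drink?\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```

Everything here runs on the local machine with no network access, which is the point being made above.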

13

ArcticWinterZzZ t1_jdtpq0u wrote

Of course, and I understand what you're talking about; I just mean that if you were interested in preserving human knowledge, an LLM would not be a great way to do it. It hallucinates information.

5

Puzzleheaded_Acadia1 t1_jdvpzmk wrote

Is GPT-4 really that good, and better than GPT-3? I don't have access to it, but if you've tried it, is it that good?

1

Anjz OP t1_jdtqjm4 wrote

I think past a certain point, hallucinations will be so rare that they won't matter.

Obviously it's still quite noticeable in the current generation, especially with GPT-3, but think 5 or 10 years down the line. The margin of error would be negligible. Even the recent 'Reflection' technique cuts down greatly on hallucinations for a lot of queries. And if you've used it, GPT-4 is much better at giving truthful responses. It comes down to usability when shit hits the fan: you're not going to be searching Wikipedia for how to get clean drinking water.
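
For anyone curious, here's a rough sketch of the 'Reflection' idea: ask the model for an answer, ask it to critique that answer, then ask it to revise. This uses the OpenAI Python client purely for illustration; the prompt wording and single-pass loop are my own simplification, not the published technique verbatim.

```python
# Simplified self-reflection loop: draft -> critique -> revision.
# Illustrative only; the prompts and one-round structure are assumptions.
from openai import OpenAI

client = OpenAI()

def chat(messages):
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

question = "How do I tell whether a freshwater fish is safe to eat?"

draft = chat([{"role": "user", "content": question}])

critique = chat([{
    "role": "user",
    "content": f"Question: {question}\n\nAnswer: {draft}\n\n"
               "List any factual errors or unsupported claims in this answer.",
}])

final = chat([{
    "role": "user",
    "content": f"Question: {question}\n\nDraft answer: {draft}\n\n"
               f"Critique: {critique}\n\nRewrite the answer, fixing the issues raised.",
}])

print(final)
```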

I think it's a great way to retrieve information without any network access.

0

ArcticWinterZzZ t1_jdtqupy wrote

Maybe, but it can't enumerate all of its knowledge for you, and it'd be better to reduce the actual network to just the reasoning component and keep the "facts" stored in a database. That way its knowledge can be updated, and we can make sure it doesn't learn the wrong thing.
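
One way to read that suggestion is a retrieval setup: keep the facts in an ordinary, editable store and only ask the model to reason over what gets retrieved. A toy sketch (the SQLite schema, example fact and prompt are illustrative assumptions, not a real system):

```python
# Toy retrieval sketch: facts live in an editable SQLite table, and the model
# is only asked to reason over the retrieved rows, not to recall them itself.
import sqlite3

conn = sqlite3.connect("facts.db")
conn.execute("CREATE TABLE IF NOT EXISTS facts (topic TEXT, fact TEXT)")
conn.execute(
    "INSERT INTO facts VALUES (?, ?)",
    ("water", "Boiling water for about one minute kills most pathogens."),
)
conn.commit()

def retrieve(topic):
    rows = conn.execute("SELECT fact FROM facts WHERE topic = ?", (topic,)).fetchall()
    return [r[0] for r in rows]

def build_prompt(question, topic):
    facts = "\n".join(f"- {f}" for f in retrieve(topic))
    return (
        "Using only the facts below, answer the question.\n"
        f"Facts:\n{facts}\n\nQuestion: {question}"
    )

print(build_prompt("How do I make river water safe to drink?", "water"))
# The resulting prompt would then be sent to whatever model does the reasoning.
```

Updating the knowledge is then just editing rows in the table, rather than retraining the network.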

2

DaffyDuck t1_jdtz90r wrote

Can you not largely prevent hallucinations by instructing it to state something, like a fact, only if it is 100% confident? Anyway, interesting topic! I'm also wondering if it could spit out all of its knowledge in a structured way to essentially rebuild human knowledge.
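
For illustration, here's what such an instruction might look like in practice. The prompt wording is my own assumption, and this only encourages abstention rather than guaranteeing it, since the model's stated confidence isn't a hard check:

```python
# Sketch of an "abstain unless certain" system prompt. The wording is an
# assumption; it reduces, but does not prevent, confident wrong answers.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Answer only if you are certain the answer is factually correct. "
    "If you are not certain, reply exactly: I don't know."
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "In what year was the fishing reel invented?"},
    ],
    temperature=0,
)
print(resp.choices[0].message.content)
```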

1