Submitted by Beautiful-Cancel6235 t3_11k1uat in singularity
phillythompson t1_jb5jysg wrote
Reply to comment by freeThePokemon246 in What might slow this down? by Beautiful-Cancel6235
And what limitations do you see with LLMs that wouldn’t be “solved” as time goes on?
Silly_Awareness8207 t1_jb5knx0 wrote
I'm no expert, but the hallucination problem seems pretty difficult
HillaryPutin t1_jb5lqyr wrote
Just give it an antipsychotic
[deleted] t1_jb8niwt wrote
I don't see how any other architecture would solve that problem; it's just a consequence of how current LLMs are trained.
agsarria t1_jb6lvxn wrote
You can't rely on LLM responses because the model always tries to answer, even when it's making things up, so it's not accurate. That's hard to change because that's just how LLMs work: they predict plausible text, not verified facts.
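To make that concrete: at each step the decoder just samples the next token from a probability distribution over its vocabulary, so it always outputs *something* plausible-looking, grounded or not. Here's a minimal sketch of that sampling step (the toy vocabulary and logits are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and model scores for "The capital of France is ..."
# (both invented for illustration).
vocab = ["Paris", "London", "Berlin", "Madrid"]
logits = np.array([2.0, 1.2, 0.9, 0.4])

def sample_next_token(logits, temperature=1.0):
    """Softmax over the logits, then sample -- the model never abstains."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Wrong tokens still carry probability mass and get sampled sometimes,
# and nothing in this loop checks the output against reality.
for _ in range(5):
    print(vocab[sample_next_token(logits)])
```

Nothing in that loop can say "I don't know" or verify a fact; lowering the temperature only makes the most likely token more dominant.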
Surur t1_jb78a7f wrote
Here is an interesting article on fixing LLM issues, including hallucinations.
https://towardsdatascience.com/overcoming-the-limitations-of-large-language-models-9d4e92ad9823
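One commonly cited mitigation for hallucinations (whether or not it's the one the article focuses on) is retrieval-augmented generation: make the model answer from retrieved documents instead of its parametric memory. A minimal sketch of the idea; the word-overlap retriever and the `call_llm` stub are placeholders I made up, not anything from the article:

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query
    (a stand-in for a real embedding-based retriever)."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for an actual model call (e.g. an API request)."""
    return f"[model answer conditioned on]\n{prompt}"

def answer(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    # Instructing the model to use only the supplied context
    # constrains (though doesn't eliminate) hallucination.
    prompt = ("Answer using ONLY the context below; say 'I don't know' "
              "if the answer isn't there.\n\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    return call_llm(prompt)

corpus = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Python was created by Guido van Rossum.",
]
print(answer("When was the Eiffel Tower completed?", corpus))
```

The point of the pattern is that the generator's claims can at least be traced back to the retrieved passages, which makes them checkable.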