Submitted by fangfried t3_11alcys in singularity
With ChatGPT you could say it doesn't always give up-to-date, correct information, but that will be mostly solved by integrating it with Bing.
It also has a lot of bias and is very limited in its freedom to really tell us what it thinks. Maybe as computing gets cheaper and open-source LLMs become as powerful as GPT, that issue can be solved.
I'm more interested, though, in whether there are any flaws with transformers and LLMs from a technical standpoint, and what those are.
nul9090 t1_j9sqmaf wrote
In my view, the biggest flaw of transformers is their quadratic complexity: the cost of self-attention grows with the square of the sequence length. This basically means they will not become significantly faster anytime soon, and context window sizes will grow slowly too.
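To see where the quadratic cost comes from, here's a minimal NumPy sketch (illustrative only, not from the comment): self-attention materializes an n × n score matrix, so doubling the context length quadruples the memory and compute for that matrix.

```python
import numpy as np

def attention_scores(n, d=64):
    """Compute the raw self-attention score matrix for a sequence
    of length n with head dimension d. The result has n * n entries,
    so cost scales as O(n^2) in sequence length."""
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((n, d))  # queries
    K = rng.standard_normal((n, d))  # keys
    return Q @ K.T / np.sqrt(d)      # shape (n, n)

small = attention_scores(512)
large = attention_scores(1024)
print(small.shape, large.shape)   # (512, 512) (1024, 1024)
print(large.size / small.size)    # 4.0 -- 2x the context, 4x the work
```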
Linear transformers and Structured State Space Sequence (S4) models are promising approaches to solve that though.
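As a rough sketch of the linear-transformer idea (assuming the kernel feature map φ(x) = elu(x) + 1, as in linearized attention; details vary by paper): replacing softmax with a feature map lets you reorder the matrix products so no n × n matrix is ever formed, dropping the cost from O(n²·d) to O(n·d²).

```python
import numpy as np

def phi(x):
    """Feature map elu(x) + 1, which keeps attention weights positive."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Kernelized attention: instead of (phi(Q) @ phi(K).T) @ V,
    compute phi(Q) @ (phi(K).T @ V). The intermediate kv matrix is
    (d, d_v), independent of sequence length n."""
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                 # (d, d_v) -- no n x n matrix
    z = Qf @ Kf.sum(axis=0)       # per-row normalizer, shape (n,)
    return (Qf @ kv) / z[:, None]
```

Reordering the products gives exactly the same output as the naive O(n²) kernelized version; the trade-off versus standard attention is that softmax is replaced by the kernel approximation.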
My hunch is that LLMs will be very useful in the near term but, in the future, they will be of little value to AGI architectures, though I am unable to explain convincingly why.