Submitted by fangfried t3_11alcys in singularity
FpRhGf t1_j9xtnne wrote
Reply to comment by Additional-Escape498 in What are the big flaws with LLMs right now? by fangfried
Thanks! Well, it's better than I thought. It still doesn't fix the limitations on the outputs I listed, but at least it's more flexible than I presumed.
Additional-Escape498 t1_j9yorbo wrote
You’re definitely right that it can’t do those things, but I don’t think it’s because of the tokenization. The wordpiece vocabulary does contain individual characters, so a model could in principle do character-level manipulation with the tokenization it uses. The real issue is that the things you’re asking for (like writing a story in Pig Latin) require reasoning, and LLMs are just mapping inputs to a manifold. LLMs can’t really do much reasoning or logic and can’t do basic arithmetic. I wrote an article about the limitations of transformers if you’re interested: https://taboo.substack.com/p/geometric-intuition-for-why-chatgpt
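You can actually see the character-level fallback directly. A minimal sketch, assuming the Hugging Face transformers package is installed; the "gpt2" vocabulary and the example strings are just illustrative choices:

```python
# Minimal sketch: inspect how a BPE/wordpiece tokenizer splits text.
# Assumes the Hugging Face `transformers` package; "gpt2" is an
# illustrative choice of vocabulary, not the only one this applies to.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

# Common words usually map to a single token...
print(tok.tokenize("hello"))    # ['hello']

# ...while rare or made-up strings fall back to smaller pieces,
# down to single characters if nothing longer matches the vocabulary.
print(tok.tokenize("zqxjkv"))   # splits into short subword pieces
```

So the character-level information is recoverable from the vocabulary; the model just doesn't reason over it reliably.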