Cheap_Meeting t1_j21v6mi wrote
Reply to comment by All-DayErrDay in [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
I think the main limitations of LLMs are:
- Hallucinations: They will make up facts.
- Alignment/Safety: They will sometimes give undesirable outputs.
- "Honesty": They cannot make reliable statements about their own knowledge and capabilities.
- Reliability: They can perform a lot of tasks, but often not reliably.
- Long context (& lack of memory): They cannot (trivially) be used when the input exceeds the context length (the common workaround is chunking; see the first sketch after this list).
- Generalization: They often require task-specific finetuning or prompting.
- Single modality: They cannot easily perform tasks on audio, images, or video.
- Input/output paradigm: It is unclear how to use them for tasks that don't have specific inputs and outputs, e.g. tasks that require taking many steps (the usual workaround is an outer loop; see the second sketch below).
- Agency: LLMs don't act as agents which have their own goals.
- Cost: Both training and inference incur significant cost (rough FLOP estimates in the last sketch below).
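
On the long-context point: a minimal map-reduce sketch of the chunking workaround. `llm_complete` is a hypothetical stand-in for whatever completion API you use, and chunk sizes are counted in words for simplicity (real code would count tokens with the model's tokenizer):

```python
# Map-reduce summarization for inputs longer than the context window.
# `llm_complete` is a hypothetical placeholder, not a real library call.

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("call your LLM API here")

def chunk(text: str, max_words: int = 2000) -> list[str]:
    # Split into word-count-bounded chunks; a real system would
    # bound by token count instead.
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_long(text: str) -> str:
    # Map: summarize each chunk independently.
    partials = [llm_complete(f"Summarize:\n\n{c}") for c in chunk(text)]
    # Reduce: merge the partial summaries in a second call.
    return llm_complete("Combine these partial summaries into one:\n\n"
                        + "\n\n".join(partials))
```

This loses cross-chunk dependencies, which is exactly why it only works "non-trivially".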
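
On the input/output point: the usual workaround for multi-step tasks is to wrap the single-shot model in a loop that feeds its own output back in. A sketch of that pattern, with `llm_complete` and `execute_tool` as hypothetical placeholders:

```python
# Outer loop that turns a one-shot text-in/text-out model into a
# multi-step process. Both helpers below are hypothetical stubs.

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("call your LLM API here")

def execute_tool(action: str) -> str:
    raise NotImplementedError("dispatch to a search/calculator/etc. tool here")

def run_task(goal: str, max_steps: int = 10) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Ask the model for its next action, or a final answer.
        step = llm_complete(transcript + "\nNext action (or FINAL: <answer>):")
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        # Execute the action and append the result to the transcript.
        observation = execute_tool(step)
        transcript += f"\nAction: {step}\nObservation: {observation}"
    return "no answer within step budget"
```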
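
On cost: the standard back-of-envelope approximations are ~6·N·D FLOPs for training (N parameters, D training tokens) and ~2·N FLOPs per generated token at inference. A quick illustration at GPT-3 scale (175B parameters, 300B training tokens; the numbers are illustrative, not a claim about any specific deployment):

```python
# Back-of-envelope FLOP estimates using the standard approximations:
# training ~ 6 * params * tokens, inference ~ 2 * params per token.

params = 175e9        # a GPT-3-scale model
train_tokens = 300e9

train_flops = 6 * params * train_tokens   # ~3.2e23 FLOPs
flops_per_output_token = 2 * params       # ~3.5e11 FLOPs per token

print(f"training:  {train_flops:.1e} FLOPs")
print(f"inference: {flops_per_output_token:.1e} FLOPs per token")
```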
Flag_Red t1_j22aiul wrote
Only the first point (hallucinations) really relates to their symbolic reasoning capabilities. It does imply that symbolic reasoning is a secondary objective for the models, though.