Submitted by valdanylchuk t3_zx7cxn in MachineLearning
In a TED interview on the future of AI from three months ago, Demis Hassabis says he spends most of his time on the problem of abstract concepts, conceptual knowledge, and approaches for moving deep learning systems into the realm of symbolic reasoning and mathematical discovery. He says DeepMind has at least half a dozen internal prototype projects working in that direction:
https://youtu.be/I5FrFq3W25U?t=2550
Earlier, around the 28-minute mark, he says that while current LLMs are very impressive, they are nowhere near sentience or consciousness, in part because they are very data-inefficient in their learning.
Can we infer what those half-dozen approaches to abstract reasoning might be from the research DeepMind has published so far? Or is this likely to be new, as-yet-unreleased work?
DeepMind lists many (though I'm not sure all) of their papers here:
https://www.deepmind.com/research
I was able to find some related papers there, but I am not qualified to judge their significance, and I probably missed some important ones because their titles are less obvious.
https://www.deepmind.com/publications/symbolic-behaviour-in-artificial-intelligence
https://www.deepmind.com/publications/learning-symbolic-physics-with-graph-networks
Can anyone help summarize the approaches currently considered promising for this problem? Is there something bigger coming up behind all the hype around ChatGPT that we're missing?
comefromspace t1_j1ytlcw wrote
I don't know, but it seems like LLMs will get there faster once they become multimodal. Language is already symbol manipulation.