
Good-AI t1_je71baq wrote

Rote learning can still get you there, because as you compress statistics and brute-force knowledge into smaller and smaller sizes, understanding has to emerge.

For example, an LLM could memorize that 1+1=2, 1+2=3, 1+3=4, and so on without end, then 2+1=3, 2+2=4, etc. But that takes an enormous amount of data. So if the neural network is forced to condense that data while keeping the same knowledge about the world, it starts to understand.

It realizes that by understanding why 1+1=2, i.e. by understanding addition itself, all possible combinations are covered. That compresses the infinite set of possible additions into one small package of data. This is what is going to happen with LLMs, and what the chief scientist of OpenAI said is already starting to happen. Source.
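A toy sketch of the contrast, in plain Python (this is just an illustration of the compression argument, not how an LLM actually represents arithmetic):

```python
# Rote learning: memorize every addition fact up to some bound.
# Storage grows quadratically with the range covered, and coverage
# is still finite.
lookup_table = {(a, b): a + b for a in range(1000) for b in range(1000)}
print(len(lookup_table))  # 1,000,000 stored facts

# "Understanding": a single rule covers every case, including pairs
# never seen before, at constant storage cost.
def add(a: int, b: int) -> int:
    return a + b

print(add(123_456, 789_012))  # works far outside any memorized range
```

The rule is vastly smaller than the table yet covers strictly more cases, which is the sense in which compression forces generalization.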
