ecnecn t1_jdqlr0w wrote

They need to design a Large Arithmetical Symbol Model that predicts the next combination of arithmetical operators; then the LLM and the LASM could coexist, just like GPT-4 and WolframAlpha.
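
A toy sketch of how that coexistence might look (hypothetical interfaces; the real GPT-4 + WolframAlpha plugin works differently): the LLM handles the language, and anything that parses as arithmetic gets routed to a small symbolic evaluator instead of being guessed token by token.

```python
import ast
import operator

# Safe evaluator for simple arithmetic expressions (the "LASM"/calculator role).
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr: str) -> float:
    """Evaluate +, -, *, / over number literals; reject anything else."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not simple arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def answer(question: str, llm) -> str:
    # `llm` is a stand-in callable (prompt -> text). It is asked to extract an
    # arithmetic expression; the symbolic evaluator then does the actual math.
    expr = llm(f"Extract the arithmetic expression from: {question}")
    try:
        return str(eval_arithmetic(expr))
    except (ValueError, SyntaxError):
        return llm(question)  # no clean arithmetic found; answer with language alone
```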

46

Independent-Ant-4678 t1_jdr0ksn wrote

An interesting thing crossed my mind while reading your answer. There is a disability called dyscalculia, which means a person does not understand numbers: the person can learn that 7 + 3 = 10 but does not understand why. I have a relative with this disability, and to me it seems that people who have it show poor reasoning abilities similar to current LLMs like GPT-4. They can learn many languages fluently and express their opinions on complex subjects, but they still reason poorly. My thinking is that with the current LLMs we've already created the language center of the brain, but the mathematical center still needs to be created, as that is what will give the AI reasoning abilities (just like in people who don't have dyscalculia).

40

Avid_Autodidact t1_jdsmy50 wrote

Fascinating! Thanks for sharing.

I would imagine creating that "mathematical" part of the brain might involve a different approach than just predicting the next combination of arithmetic operators. As you put it, someone learning that 7 + 3 = 10 without understanding why is similar to how LLMs work with the data they are trained on, whereas with something like Wolfram Alpha the methods of solving have to be programmed.
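
A rough illustration of that contrast (purely hypothetical code, not how either system actually works internally): rote recall only covers facts that were seen before, while a programmed procedure generalizes.

```python
# Hypothetical contrast: rote recall vs. a programmed method.

memorized_sums = {(7, 3): 10, (2, 2): 4}  # "training data": facts learned by rote

def recall_sum(a: int, b: int):
    # Memorization-style behaviour: answer only if this exact fact was seen.
    return memorized_sums.get((a, b), "unknown -- never saw this one")

def compute_sum(a: int, b: int) -> int:
    # Wolfram-Alpha-style behaviour: an explicit procedure that works for any inputs.
    return a + b

print(recall_sum(7, 3))       # 10 (memorized)
print(recall_sum(123, 456))   # unknown -- never saw this one
print(compute_sum(123, 456))  # 579
```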

4

Ytumith t1_jduecob wrote

Poor reasoning as in general understanding, or specifically for maths and the natural sciences that rely on it?

2

RadioFreeAmerika OP t1_jduhkmz wrote

Interesting, I just voiced the same thought in a reply to another comment. I can totally see this being the case in one way or another.

1

MysteryInc152 t1_jdrpjd4 wrote

Sorry, I'm hijacking the top comment so people will hopefully see this.

Humans learn language and concepts through sentences, and in most cases semantic understanding can be built up just fine this way. It doesn't work quite the same way for math.

When I look at an arbitrary set of numbers, I have no idea whether they are primes or factors of something, because the numbers themselves don't carry much semantic content. Understanding whether they are those things actually requires stopping and performing some specific analysis on them, using rules internalized through a specialized learning process. Humans don't learn math just by talking to one another about it; they actually have to do it in order to internalize it.
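
A small example of what that "specific analysis" looks like in practice (a standard trial-division check, nothing LLM-specific): primality can't be read off a number's surface form, it has to be computed.

```python
def is_prime(n: int) -> bool:
    # Trial division: an explicit, rule-based procedure, not pattern recognition.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# 7919 and 7917 look nearly identical as strings of digits,
# but only running the analysis tells them apart.
print(is_prime(7919))  # True
print(is_prime(7917))  # False (3 * 2639)
```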

In other words, mathematics or arithmetic is not highly encoded in language.

The encouraging thing is that this does improve with scale. GPT-4 is much, much better than 3.5.

10

ecnecn t1_jdruk43 wrote

Actually, you can with logic; Prolog wouldn't work otherwise. The basis of mathematics is logic. Propositional logic and predicate logic can express all mathematical rules and their application.
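
A toy version of that idea, sketched here in Python rather than Prolog (Prolog would state it even more directly): with Peano-style numerals, addition follows entirely from two logical rewrite rules, with no built-in arithmetic involved.

```python
# Peano-style naturals: a number is either ZERO or the successor of another number.
ZERO = ("zero",)

def succ(n):
    return ("succ", n)

def add(m, n):
    if m == ZERO:               # rule 1: 0 + n = n
        return n
    return succ(add(m[1], n))   # rule 2: succ(m) + n = succ(m + n)

def to_int(x):
    return 0 if x == ZERO else 1 + to_int(x[1])

three = succ(succ(succ(ZERO)))
two = succ(succ(ZERO))
print(to_int(add(three, two)))  # 5
```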

1

MysteryInc152 t1_jdruv58 wrote

I didn't say you couldn't. I said it's not highly encoded in language. Not everything that can be extracted from language can be extracted with the same ease.

3

ecnecn t1_jdrvfvr wrote

You're right, only parts of mathematics, like logic, are encoded in language. It would need some hybrid system.

2

RadioFreeAmerika OP t1_jdqnm1k wrote

Hmm, now I'm interested in what would happen if you integrate the training sets before training, have some kind of parallel or two-step training process, or somehow merge two differently trained or constructed AIs.

5