
PM_ME_ENFP_MEMES t1_jcjubn0 wrote

I read something about LLMs and why they’re so bad at math: during tokenisation, numbers don’t automatically get tokenised as the actual number. So, 67 may become a single token representing ‘67’, and all would be well.

However, it’s also possible that 67 gets split into two tokens, ‘6’ and ‘7’, which can confuse the model if it’s asked to compute 67^2.
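For example, here’s a minimal sketch (assuming the Hugging Face `transformers` package and the GPT-2 tokenizer; the exact splits depend on the tokenizer’s vocabulary) showing how the same number can come out as one token or several:

```python
# Minimal sketch: inspect how a BPE tokenizer splits numbers.
# Assumes `pip install transformers` and the GPT-2 vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for text in ["67", "12345", "67 squared is 4489"]:
    tokens = tokenizer.tokenize(text)
    print(f"{text!r} -> {tokens}")

# Multi-digit numbers are often broken into arbitrary chunks rather than
# individual digits, so the model never sees a consistent representation
# of place value.
```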

0

Available_Lion_652 t1_jcjukim wrote

Yes, there is currently a fix for this problem. In the LLaMA paper they split numbers into individual digits: 12345 became 1 2 3 4 5, and 29 December became 2 9 December.

It helps with addition and subtraction, but not with complex reasoning.
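As a rough illustration (not the LLaMA preprocessing code itself), digit splitting can be done with a simple regex before tokenisation:

```python
import re

def split_digits(text: str) -> str:
    # Insert a space between adjacent digits so each digit is tokenised on its own.
    return re.sub(r"(?<=\d)(?=\d)", " ", text)

print(split_digits("12345"))        # "1 2 3 4 5"
print(split_digits("29 December"))  # "2 9 December"
```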

3