
Cryptizard t1_je6qdax wrote

If it has understanding, it is a strange, statistics-based understanding that doesn't align with what many people think of as rational intelligence. For instance, an LLM can learn that 2+2=4 by seeing it a bunch of times in its input. But you can also convince it that 2+2=5 by telling it that enough times. It cannot take a prior rule and use it to discard future data. Eventually, new data will overwrite the old understanding (a toy sketch of this is below).

It doesn't have the ability to take a simple logical postulate and apply it consistently to discover new things, because there are no things that are absolutely true to an LLM. It is purely statistical, which always leaves some chance of it conflicting with itself ("hallucinating," they call it).

This is probably why we need a more sophisticated multi-part AI system to really achieve AGI. LLMs are great at what they do, but what they do is not everything. Language is flexible and imprecise, so statistical modeling works great for it. Other things are not, and LLMs tend to fail there.
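
A toy sketch of that failure mode, assuming nothing more than a bare frequency counter (a real LLM is far more complex, but the dynamic of statistics being shouted down by repetition is the same idea):

```python
from collections import Counter

# Toy stand-in for "statistical understanding": just a frequency count
# of what has followed the prompt "2+2=" in the training data.
observations = Counter()

def observe(completion: str) -> None:
    """Record one training example of what followed '2+2='."""
    observations[completion] += 1

def predict() -> str:
    """Return the most frequently seen completion for '2+2='."""
    return observations.most_common(1)[0][0]

# It "learns" that 2+2=4 by seeing it a bunch of times...
for _ in range(100):
    observe("4")
print(predict())  # -> 4

# ...but enough repetitions of the contradiction eventually win out.
for _ in range(101):
    observe("5")
print(predict())  # -> 5

# There is no prior rule it can appeal to in order to reject the new data;
# its "knowledge" is whatever the counts happen to say right now.
```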

75

Deathburn5 t1_je6v3bn wrote

On one hand, I agree with what you're saying. On the other, convincing people that 2+2=5 wouldn't be hard if I had access to all of their attention every microsecond of every day for their entire life, plus control of every bit of information they learn.

13

phriot t1_je6xyk2 wrote

But once you learn why 2 + 2 = 4, it's going to be hard to convince you that the solution is really 5. Right now, LLMs have rote learning, and maybe some ability to do synthesis. They don't yet have the ability to actually reason out an answer from first principles.

14

Good-AI t1_je71baq wrote

Rote learning can still get you there, because as you compress statistics and brute-force knowledge into smaller and smaller representations, understanding has to emerge.

For example, an LLM can memorize that 1+1=2, 1+2=3, 1+3=4, and so on to infinity, then 2+1=3, 2+2=4, etc. But that amounts to a lot of data. So if the neural network is forced to condense that data while keeping the same knowledge about the world, it starts to understand.

It realizes that by just understanding why 1+1=2, all possible combinations are covered, because it has understood addition itself. That compresses all the infinite possibilities of addition into one small package of data. This is what is going to happen with LLMs, and what the chief scientist of OpenAI has said is already starting to happen. Source. (A crude sketch of the compression idea is below.)
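
A crude way to picture that compression (a hand-written toy, not anything an actual network does internally):

```python
# Memorization: one stored fact per sum. The table grows without bound
# and still only covers the pairs it has actually seen.
memorized = {(a, b): a + b for a in range(200) for b in range(200)}
print(len(memorized))       # 40,000 stored facts, and still nowhere near "infinity"
print(memorized[(2, 2)])    # 4

# Compression: a single rule ("adding b means taking the successor b times")
# that covers every pair of non-negative integers in a few lines.
def add(a: int, b: int) -> int:
    result = a
    for _ in range(b):
        result += 1         # successor step
    return result

print(add(2, 2))            # 4
print(add(123456, 654321))  # works for pairs the table never stored
```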

11

Isomorphic_reasoning t1_je71vlt wrote

> Rote learning, and maybe some ability to do synthesis. They don't have the ability as of now to actually reason out an answer from first principles.

Sounds like 80% of people

9

BigMemeKing t1_je74m5d wrote

Not really. Why does 2+2=4? The first question I would ask is: what are we trying to solve for? I have 2 pennies, I get 2 more pennies, now I have 4 pennies. Now, we could add variables to this. One of the pennies has a big hole in it, making it invalid currency. So while yes, you do technically have 4 pennies, in our current dimension you only have 3, since one is, in all form and function, garbage.

Now, let's say one of those pennies has special attributes that make it worth more. While you may now have 4 pennies, one of them is worth 25 pennies. So while you technically have only four pennies, your net result in our current dimension is a total of 28 pennies. 2+2 only equals 4 in a one-dimensional space. The more dimensions you add to an equation, the more complicated the formula/format becomes.

−1

phriot t1_je779ga wrote

But if you feed an LLM enough input data where "5 apples" follows "Adding 2 apples to an existing two apples gets you...," it's pretty likely to tell you that if Johnny has two apples and Sally has two apples, together they have 5 apples. This is true even if it can also tell you all about counting and discrete math. That's the point here.

2

Quentin__Tarantulino t1_je8l6u6 wrote

If you feed that information into a human brain enough times and from enough sources, they will absolutely believe it too. Humans believe all sorts of dumb things that are objectively false. I don’t think your argument refutes OP.

Once AI has other sensory inputs from the real world, its intelligence will be basically equal to that of biological creatures. The difference is that right now it can't see, hear, or touch. Once it's receiving and incorporating those inputs, as well as way more raw data than a human can process, not only will it be intelligent, it will be orders of magnitude more intelligent than the smartest human in history.

2

Superschlenz t1_je84822 wrote

>Why does 2+2=4?

Because someone defined the digit symbols and their order to be 1 2 3 4 5. If they had defined đ 2 § Π Ø instead, then 2+2=Π.
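
The same point in a couple of lines of Python (purely illustrative, using the relabeling above):

```python
# The symbols are arbitrary; the underlying quantity is not.
# Map the values 1..5 to the alternative symbols from the comment above.
symbols = {1: "đ", 2: "2", 3: "§", 4: "Π", 5: "Ø"}
print(f"2+2={symbols[2 + 2]}")  # prints "2+2=Π"
```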

2

Easyldur t1_je6w2av wrote

I agree with this, in that LLMs are models of language and knowledge (information? knowledge? debatable!), but they are really not models of learning.

Literally, an LLM as it is today cannot learn: "Knowledge cutoff September 2021".

But LLMs certainly display many emergent abilities beyond the mere "predict a list of possible upcoming tokens and choose one at random".

The fact that even OpenAI, in their demos, uses some very human-like prompts to instruct the model toward a certain task makes you understand that there is something emergent in an LLM beyond "write random sentences".

Also, ChatGPT and its friends are quite "meta". They are somehow able to reflect on themselves. There are some interesting examples where asking an LLM to reflect on its previous answer a couple of times produces better and more reliable information than a one-shot answer (the pattern is sketched below).

I am quite sure that once they figure out how to wire these emergent capabilities to some form of continuous training, the models will be quite good at distinguishing "truth" from "not-truth".
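
A minimal sketch of that reflect-on-your-own-answer chain; `ask_llm` is a hypothetical placeholder for whatever model client you actually use, since only the prompting pattern matters here:

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: wire this up to your own model or API client.
    raise NotImplementedError("plug in your own model client here")

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    """Ask, then repeatedly critique and revise the answer."""
    answer = ask_llm(question)
    for _ in range(rounds):
        critique = ask_llm(
            f"Question: {question}\n"
            f"Proposed answer: {answer}\n"
            "List any factual errors or unsupported claims in the answer."
        )
        answer = ask_llm(
            f"Question: {question}\n"
            f"Previous answer: {answer}\n"
            f"Critique: {critique}\n"
            "Write an improved answer that fixes the issues above."
        )
    return answer
```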

10

PandaBoyWonder t1_je9p7ly wrote

It will be hilarious to watch the AGI disprove people, and the people won't be able to argue with it because it will be able to flesh out any answer it gives.

There won't be misinformation anymore.

3

agorathird t1_je8y4c6 wrote

>Literally, an LLM as it is today cannot learn: "Knowledge cutoff September 2021".

It's kind of poetic; this was also the issue with Symbolic AI. But hopefully, with the pace of breakthroughs, having to revisit the question "What is learning?" every once in a while won't be costly.

2

NoSweet8631 t1_je7vqbk wrote

>But, you can also convince it that 2+2=5 by telling it that is true enough times.

Meanwhile, some people are convinced that the Earth is flat and are even willing to die trying to prove it.
So...

3

Cryptizard t1_je7wz9e wrote

What's your point?

2

PandaBoyWonder t1_je9pcc1 wrote

I think he is saying that essentially the LLM is just as "smart" as a human, because there are humans that "hallucinate" just as much as the LLM does.

2

Cryptizard t1_je9t6gg wrote

There are also a lot of humans that don't, though. For humans it's not a structural problem.

1

hopelesslysarcastic t1_je7vmsa wrote

>This is probably why we need a more sophisticated multi-part AI system

Cognitive architectures are meant to address this very problem… LLMs are based on NN architectures, which fundamentally operate without transparency (hence the "black box" label) and are inherently unable to "reason".

2

Andriyo t1_je8pc91 wrote

Our understanding is also statistical, based on the fact that the majority of texts we've seen use base-10 numbers. One can invent a math where 2+2=5 (and mathematicians do that all the time). So your "understanding" is just formed statistically from the most common convention for finishing the text "2+2=...". Arguably, a simple calculator has a better understanding of addition, since it has a more precise model of the addition operation.

2

Cryptizard t1_je94z3w wrote

No lol. A better way to illustrate what I am saying is that if you learn how addition works, then whenever you see 2+2=5 you know it is wrong and can reject that data. LLMs cannot; they weigh everything equally. And no, there is no number system where 2+2=5; that is not how bases work.

1

jlowe212 t1_je8cf2q wrote

Humans are capable of being convinced of many things that are obviously false. Even otherwise smart humans fall into cognitive traps, and sometimes can be even more dangerous when those humans are confident in their own intelligence.

1

XtremeTurnip t1_je8q3cj wrote

>But, you can also convince it that 2+2=5 by telling it that is true enough times.

That has more to do with how it's been configured.

If your wife tells you it's 5, you say it's 5 too, regardless of prior knowledge.

1

Nastypilot t1_je8y3p7 wrote

>But, you can also convince it that 2+2=5 by telling it that is true enough times

The same is true for humans, though; it's essentially what gaslighting is. Or, to use a less malicious example, think of a colorblind person: how do they know grass is green? Because everyone told them so.

1