Ortus12 t1_j1gewqe wrote

LLMs only need scale to become ASI, that is, intelligent enough to design machines, write code, develop theories, and come up with insights better than any human. The LLM itself will be able to write the code for other ASIs that are even more powerful, with working memory.

LLMs aren't learning from a single person; they are reverse engineering the thought patterns of all of humanity, including all of the scientists and mathematicians who have ever written books or research papers, all the programmers who have ever posted their code or helped solve a programming problem, all the poets, and even all the artists (Google already connected their LLM with Imagen and got a model that is better at both individual tasks and at tasks combining the two).

It's the opposite. People don't understand how close we are to the singularity.

15

dookiehat t1_j1gtna8 wrote

LLMs, while undeniably useful and interesting, do not have intentions; they only respond to input.

Moreover, it is important to remember that large language models are trained only on text data. There is no other data to contextualize what they are talking about. As a user of a large language model, you see coherent “thoughts” and then fill in the blanks of meaning with your own sensory knowledge.

So an iguana eating a purple apple on a Thursday means nothing to a large language model except the words’ probabilistic relationship to one another. Even if this is merely reductionist thinking, I am still CERTAIN that a large language model has no visual “understanding” of the words. It has only contextual relationships within its model and is devoid of any content that it can reference to understand meaning.

13

SurroundSwimming3494 t1_j1h5w35 wrote

>People don't understand how close we are to the singularity.

You don't know that for a fact, though. I don't know why some people act as if they know for a fact what the future holds. It's one thing to believe the singularity is close, but to claim that you know it's close (which is what your comment seems to be doing) comes off as pretty arrogant.

7

Mr_Hu-Man t1_j1hjr04 wrote

I agree with this point of view. Anyone who claims anything with absolute certainty is spouting BS.

2

Cryptizard t1_j1hfn4j wrote

Here is where it becomes obvious that you don’t understand how LLMs work. They have a fixed-depth evaluation circuit, which means they take the same amount of time to respond to the prompt “2+2=?” as they do to “simulate this complex protein folding” or “break this encryption key”. There are fundamental limits on the computation an LLM can do which prevent it from being ASI. In CS terms, anything that is not computable by a constant-depth circuit (and many important things are not) cannot be computed by an LLM.
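To make the fixed-depth point concrete, here is a toy sketch (the layer count and cost model are invented for illustration, not taken from any real model): the same stack of layers runs per token no matter how hard the prompt is.

```python
NUM_LAYERS = 96  # fixed depth; an arbitrary illustrative layer count

def forward_pass(prompt: str) -> float:
    """Simulate one token of transformer inference: exactly
    NUM_LAYERS layers of work run, regardless of how hard the
    prompt is. Only the prompt length changes the cost."""
    work = 0.0
    for _ in range(NUM_LAYERS):   # depth never varies with the question
        work += len(prompt) ** 2  # toy stand-in for per-layer attention cost
    return work

# The "hard" prompt gets no extra depth to iterate with:
print(forward_pass("2+2=?"))
print(forward_pass("break this encryption key"))
```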

7

YesramDeens t1_j1jzogr wrote

What are these “many important things”?

1

Cryptizard t1_j1k30q3 wrote

Protein folding, n-body simulation, really any type of simulation, network analysis, anything in cryptography or anything that involves matrices. Basically anything that isn’t “off the top of your head” and requires an iterative approach or multiple steps to solve.
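For instance, a minimal n-body sketch (naive Euler integration with made-up constants, not any real physics library): each timestep depends on the previous one, so the required depth grows with the number of steps instead of staying constant.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt=0.01, eps=1e-3):
    """One timestep of a naive n-body integrator. Each step
    depends on the previous one, so the loop cannot be flattened
    into a single fixed-depth computation."""
    diff = pos[None, :, :] - pos[:, None, :]         # pairwise displacements
    dist3 = (np.sum(diff**2, axis=-1) + eps) ** 1.5  # softened cubed distances
    acc = np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)
    vel = vel + dt * acc
    return pos + dt * vel, vel

rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 3))
vel = np.zeros((5, 3))
mass = np.ones(5)
for _ in range(1000):  # total depth grows with the timestep count
    pos, vel = nbody_step(pos, vel, mass)
```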

1

Argamanthys t1_j1hpxay wrote

Accurate right up until someone says 'think it through step by step'.
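Roughly speaking (generate() here is a hypothetical stub, not any real API), each step the model writes out goes back into the context, buying one more fixed-depth pass per step:

```python
def generate(context: str) -> str:
    """Hypothetical stand-in for one fixed-depth LLM forward pass;
    a real model would return the next reasoning step as text."""
    return f"<next step, deduced from {len(context)} chars of context>"

context = "Q: 17 * 24 = ?\nLet's think it through step by step.\n"
for _ in range(4):
    step = generate(context)  # one constant-depth pass...
    context += step + "\n"    # ...whose output feeds the next pass

# Effective depth = layers per pass * number of steps, paid for in tokens.
```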

−1

Cryptizard t1_j1hvl85 wrote

Except no, because they currently scale quadratically with the number of “steps” they have to think through. Maybe we can fix that, but it’s not obvious that it is possible to fix without a completely new paradigm.
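Back-of-the-envelope version of that scaling problem (the token counts are invented for illustration): self-attention costs roughly the square of the context length per pass, and every chain-of-thought step makes the context longer.

```python
PROMPT_TOKENS = 100  # assumed initial prompt length
STEP_TOKENS = 50     # assumed tokens emitted per reasoning step

def attention_cost(n_tokens: int) -> int:
    return n_tokens ** 2  # self-attention is O(n^2) in context length

total = 0
for step in range(1, 21):
    context_len = PROMPT_TOKENS + step * STEP_TOKENS
    total += attention_cost(context_len)  # each step re-pays a bigger quadratic bill
print(total)  # the running total grows roughly cubically in the step count
```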

1