Cryptizard t1_j1hfn4j wrote

Reply to comment by Ortus12 in Hype bubble by fortunum

Here is where it becomes obvious that you don’t understand how LLMs work. They have a fixed-depth evaluation circuit, which means they spend the same amount of computation responding to the prompt “2+2=?” as they do to “simulate this complex protein folding” or “break this encryption key”. There are fundamental limits on the computation an LLM can do which prevent it from being ASI. In CS terms, anything that is not computable by a constant-depth circuit (many important things) cannot be computed by an LLM.
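
A minimal sketch of that point (a toy PyTorch model of my own, not anyone’s actual LLM): a transformer forward pass runs the same fixed stack of layers no matter what the prompt says, so the circuit depth never adapts to the difficulty of the question.

```python
import torch
import torch.nn as nn

class ToyTransformer(nn.Module):
    def __init__(self, d_model=64, n_layers=12, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.layers = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):
        # Exactly n_layers of computation happen here, whether x encodes
        # "2+2=?" or "break this encryption key" -- the depth is fixed.
        return self.layers(x)

model = ToyTransformer()
easy = torch.randn(1, 8, 64)  # stand-in embedding for an easy prompt
hard = torch.randn(1, 8, 64)  # stand-in embedding for a "hard" prompt
with torch.no_grad():
    model(easy)
    model(hard)  # identical depth, and identical work for equal lengths
```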

7

YesramDeens t1_j1jzogr wrote

What are these “many important things”?

1

Cryptizard t1_j1k30q3 wrote

Protein folding, n-body simulation (really any kind of simulation), network analysis, anything in cryptography, anything that involves iterated matrix computations. Basically anything that isn’t “off the top of your head” and instead requires an iterative approach or multiple steps to solve.
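
To make the contrast concrete, here is a sketch of one of those examples (my own toy n-body integrator, not any particular library): each timestep depends on the previous one, so T steps require T sequential rounds of computation, and the depth grows with T rather than staying constant.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt=0.01, G=1.0, eps=1e-3):
    # Pairwise gravitational acceleration (softened to avoid singularities).
    diff = pos[None, :, :] - pos[:, None, :]            # (n, n, 3)
    dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5  # (n, n)
    acc = G * np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)
    return pos + vel * dt, vel + acc * dt

rng = np.random.default_rng(0)
pos, vel = rng.normal(size=(8, 3)), np.zeros((8, 3))
mass = np.ones(8)
for _ in range(1000):  # 1000 sequential steps: depth grows with step count
    pos, vel = nbody_step(pos, vel, mass)
```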

1

Argamanthys t1_j1hpxay wrote

Accurate right up until someone says 'think it through step by step'.

−1

Cryptizard t1_j1hvl85 wrote

Except no, because the cost currently scales quadratically with the number of “steps” they have to think: every chain-of-thought token extends the context, and attention over a context of n tokens costs O(n²). Maybe we can fix that, but it’s not obvious that it is possible to fix without a completely new paradigm.
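
A back-of-the-envelope sketch of that scaling (illustrative numbers of my own, not from the thread): causal self-attention over n tokens touches roughly n²/2 token pairs, so adding k reasoning tokens grows the cost quadratically in the total context length.

```python
def attention_pairs(prompt_len, reasoning_tokens):
    n = prompt_len + reasoning_tokens
    # Token i attends to tokens 1..i, so the total is n(n+1)/2 ~ n^2/2.
    return n * (n + 1) // 2

for k in (10, 100, 1000):
    print(f"{k:>5} reasoning tokens -> {attention_pairs(50, k):>8} pairs")
# 10 -> 1830, 100 -> 11325, 1000 -> 551775
# (roughly 300x the work for 100x more steps)
```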

1