Submitted by fortunum t3_zty0go in singularity
Cryptizard t1_j1hfn4j wrote
Reply to comment by Ortus12 in Hype bubble by fortunum
Here is where it becomes obvious that you don’t understand how LLMs work. They have a fixed-depth evaluation circuit, which means they take the same amount of time to respond to the prompt “2+2=?” as they do to “simulate this complex protein folding” or “break this encryption key”. There are fundamental limits on the computation an LLM can do that prevent it from being ASI. In CS terms, anything that is not computable by a constant-depth circuit (which covers many important things) cannot be computed by an LLM.
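To make the fixed-depth point concrete, here is a toy sketch (not a real LLM; the layer count, hidden size, and “embedding” are made up for illustration). The stack of layers is fixed when the model is built, so every prompt gets exactly the same amount of computation per forward pass, regardless of how hard the question is:

```python
# Toy sketch of a fixed-depth model: the same N_LAYERS run for every
# prompt, so per-pass compute is constant no matter what is asked.
import numpy as np

N_LAYERS = 12   # fixed at "training" time (hypothetical value)
D_MODEL = 64    # hidden size (hypothetical value)

rng = np.random.default_rng(0)
# Hypothetical frozen weights; each matrix stands in for one
# attention + MLP block of a real transformer.
weights = [rng.standard_normal((D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
           for _ in range(N_LAYERS)]

def forward(prompt: str) -> int:
    """Run the fixed-depth stack once; return how many layers executed."""
    # Crude stand-in for tokenization + embedding.
    x = np.zeros(D_MODEL)
    for i, c in enumerate(prompt):
        x[i % D_MODEL] += ord(c)
    layers_run = 0
    for w in weights:           # depth never depends on the input
        x = np.tanh(w @ x)      # stand-in for attention + MLP
        layers_run += 1
    return layers_run

print(forward("2+2=?"))                          # 12
print(forward("simulate this protein folding"))  # 12 -- same depth either way
```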
YesramDeens t1_j1jzogr wrote
What are these “many important things”?
Cryptizard t1_j1k30q3 wrote
Protein folding, n-body simulation (really any type of simulation), network analysis, anything in cryptography or that involves matrices. Basically, anything that can’t be done “off the top of your head” and instead requires an iterative approach or multiple steps to solve. The n-body sketch below shows what that sequential structure looks like.
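A minimal n-body sketch, assuming toy units and naive Euler integration (the step count and constants are arbitrary). The point is structural: step t+1 needs step t’s output, so the loop length grows with the simulation, and the work can’t be collapsed into one fixed-depth pass:

```python
# Toy n-body gravity simulation: an inherently sequential, multi-step
# computation, in contrast to a single fixed-depth forward pass.
import numpy as np

def nbody_step(pos, vel, mass, dt=0.01, g=1.0, eps=1e-3):
    """One Euler step of pairwise gravity (softened to avoid division by zero)."""
    diff = pos[None, :, :] - pos[:, None, :]             # (n, n, 3) displacements
    dist3 = (np.sum(diff**2, axis=-1) + eps) ** 1.5      # softened |r|^3
    acc = g * np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)
    return pos + vel * dt, vel + acc * dt

rng = np.random.default_rng(0)
n = 5
pos, vel = rng.standard_normal((n, 3)), rng.standard_normal((n, 3))
mass = np.abs(rng.standard_normal(n))

n_steps = 10_000            # sequential: each step depends on the previous one
for _ in range(n_steps):
    pos, vel = nbody_step(pos, vel, mass)
```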
Argamanthys t1_j1hpxay wrote
Accurate right up until someone says 'think it through step by step'.
Cryptizard t1_j1hvl85 wrote
Except no, because they currently scale quadratically with the number of “steps” they have to think through. Maybe we can fix that, but it’s not obvious that it’s possible without a completely new paradigm.
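A rough cost sketch of why, assuming vanilla self-attention (the prompt length is a made-up placeholder): each newly generated “step” token attends over all previous tokens, so k reasoning steps cost on the order of k² attention comparisons in total:

```python
# Counting pairwise attention comparisons during autoregressive decoding:
# the total grows roughly quadratically in the number of generated tokens.
def attention_cost(n_step_tokens: int, prompt_tokens: int = 100) -> int:
    """Total attention comparisons across the whole decode."""
    total = 0
    length = prompt_tokens
    for _ in range(n_step_tokens):
        length += 1
        total += length      # the new token attends to every earlier token
    return total

for k in (10, 100, 1000):
    print(k, attention_cost(k))   # ~quadratic growth in k
```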