red75prime t1_ir25nxr wrote
Reply to comment by MurderByEgoDeath in What happens in the first month of AGI/ASI? by kmtrp
Memory, processing power, simplified access to your own hardware, and the ability to construct much more complex mental representations.
Feynman once said that when he solves a problem he constructs a mental representation (a ball that grows spikes, the spikes become multicolored, something like that) that captures the conditions of the problem, and then he can just see the solution. Imagine that you could visualize a 10-dimensional manifold that changes its colors (in a color space with six primary colors).
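To make that concrete, here's a minimal sketch of the kind of object that would mean holding in view; everything in it (the shapes, the mixing matrix, the color function) is my own illustrative invention, not something from the comment. It just represents a "colored 10-D manifold" as a map from a 10-dimensional parameter space to a 6-primary color space:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-1, 1, size=(1000, 10))   # samples from a 10-D domain

def color(p: np.ndarray) -> np.ndarray:
    """Assign each 10-D point a color in a 6-primary color space."""
    M = np.linspace(0, 1, 60).reshape(10, 6)    # arbitrary fixed mixing matrix
    return 1 / (1 + np.exp(-(p @ M)))           # squash each primary into [0, 1]

colors = color(points)                          # shape (1000, 6)
# A human can visualize at most a 3-D slice of this object at a time;
# the point above is that an AGI could hold the whole thing in "view" at once.
```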
Yep, scientists are probably able to painstakingly construct, layer after layer, the intuitions that let them make sense of an AI's result, one the AI simply saw. But alongside universality there's efficiency: a three-layer neural network is a universal approximator, but it's terribly inefficient at learning.
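For anyone curious, here's a rough sketch of that universality-vs-efficiency gap (Python/numpy; the target function, the widths, and the random-feature setup are my own arbitrary choices for illustration). "Three-layer" here means input, one hidden layer, output. Universality says some width suffices to approximate any continuous function; it says nothing about how efficiently that width gets used:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 512)[:, None]
y = np.sin(3 * x)  # target function to approximate

for width in (10, 100, 1000):
    # Random hidden layer (weights frozen), linear readout fit by least squares.
    W = rng.normal(size=(1, width))
    b = rng.normal(size=width)
    H = np.tanh(x @ W + b)                          # hidden activations
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)    # fit the output layer
    mse = np.mean((H @ coef - y) ** 2)
    print(f"width={width:5d}  mse={mse:.2e}")
# The error does drop as width grows, so approximation is achievable, but only
# by throwing ever more units at the problem instead of exploiting its structure.
```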
MurderByEgoDeath t1_ir276sm wrote
I totally grant that, but it's important to note that we still haven't come close to hitting the limits of our understanding. Which is to say, whatever extra memory and processing power we've needed to understand something, we've been very good at offloading to external systems, as with our computers.