
No_Ninja3309_NoNoYes t1_j5knhnq wrote

The focus is currently on Deep Learning. So why would DL not bring AGI in its current form? First, in simple terms, how does it work? The most common setup uses inputs and weights. The sum of their products is propagated forward, with ReLUs, batch normalisation, residual connections, and all kinds of tricks in between. The outputs are checked against expected values, and the weights are then updated so the outputs better fit the expected values for the given inputs.
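A minimal sketch of that loop in plain NumPy (shapes, layer sizes, and the learning rate are illustrative, not any particular framework's defaults):

```python
import numpy as np

# Toy two-layer network: weighted sums, a ReLU in between,
# and a gradient step that nudges weights toward the expected outputs.
rng = np.random.default_rng(0)

x = rng.normal(size=(4, 3))          # 4 inputs with 3 features each
y = rng.normal(size=(4, 1))          # expected outputs
W1 = rng.normal(size=(3, 8)) * 0.1   # weights, layer 1
W2 = rng.normal(size=(8, 1)) * 0.1   # weights, layer 2
lr = 0.01

for step in range(100):
    # forward pass: sum of products, ReLU, sum of products
    h = x @ W1
    a = np.maximum(h, 0.0)           # ReLU
    pred = a @ W2

    # check outputs against expected values (mean squared error)
    loss = np.mean((pred - y) ** 2)

    # backward pass: propagate the error and update the weights
    d_pred = 2.0 * (pred - y) / y.size
    dW2 = a.T @ d_pred
    d_h = (d_pred @ W2.T) * (h > 0.0)    # gradient through ReLU
    dW1 = x.T @ d_h

    W1 -= lr * dW1
    W2 -= lr * dW2
```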

There are multiple neural layers; that is why we speak of Deep Learning. To use a crude analogy, imagine you are a squad leader and your soldiers understand 80% of your orders. Now imagine being the platoon leader, and your squad leaders in turn understand 80% of your orders. Only 64% of what you say reaches your soldiers. Now imagine having a hundred or more layers. Adding layers isn't free. And with almost all AI companies doing the same, we will run out of GPUs soon.
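A quick back-of-the-envelope version of that analogy (the 80% figure is the analogy's illustration, not a measured quantity):

```python
# Fraction of an "order" that survives n layers, each passing on 80% of it
for n in (2, 10, 50, 100):
    print(n, 0.8 ** n)
# 2    0.64
# 10   ~0.107
# 50   ~1.4e-05
# 100  ~2.0e-10
```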

Also, real neurons are more complicated than the ones in DL models. There are things like spiking, brain plasticity, neurotransmitters, and synaptic plasticity that DL doesn't take into account. So the obvious solution is neuromorphic hardware and the algorithms to go with it. It's anyone's guess when they will be ready.
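For a sense of what "spiking" means here, a minimal leaky integrate-and-fire neuron (the constants are arbitrary illustrative values, not taken from any specific neuromorphic system):

```python
import numpy as np

# Leaky integrate-and-fire: the membrane potential leaks toward rest,
# accumulates input current, and emits a discrete spike when it crosses
# a threshold -- unlike a DL unit, the output is a train of spikes over time.
rng = np.random.default_rng(1)

v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
leak, dt = 0.1, 1.0
v = v_rest
spikes = []

for t in range(100):
    current = rng.uniform(0.0, 0.25)          # random input current
    v += dt * (-leak * (v - v_rest) + current)
    if v >= v_thresh:
        spikes.append(t)                      # spike, then reset
        v = v_reset

print("spike times:", spikes)
```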

5

red75prime t1_j5kx5ha wrote

Backpropagation is exactly the tool that takes care of the servicemen not getting the orders. There's the vanishing gradient problem affecting deep networks, but ReLUs and residual connections seem to take care of it just fine. Mitigating the problem in recurrent networks is harder, though.
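A toy picture of why residual connections help (the layer count and per-layer derivative are made-up illustrative numbers): in a plain stack the chain rule multiplies many small factors, while a skip connection y = x + f(x) makes each factor (1 + f'(x)), so the product can't collapse to zero the same way.

```python
import numpy as np

# Each layer's local derivative is small (0.01 here, purely illustrative).
local_grads = np.full(100, 0.01)

# Plain stack: product of small factors, the gradient vanishes.
plain = np.prod(local_grads)            # 1e-200

# Residual stack: each factor is (1 + f'(x)), the product stays around 1.
residual = np.prod(1.0 + local_grads)   # ~2.7

print(f"plain:    {plain:.3e}")
print(f"residual: {residual:.3f}")
```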

As for the brain... Its architecture is most likely not the one and only architecture suitable for general intelligence. And given that researchers get similar results when scaling up different architectures, there are probably quite a few of them.

6