
blimpyway t1_j4pndcs wrote

One application I can think of is learning at the edge. There is an industry trend to embed AI inference capabilities in newer ARM chips, the so-called NPUs, which are simplified GPU-like accelerators optimized only for inference (forward passes). Such an algorithm would enable them to learn using only forward passes, hence without requiring backpropagation.

Another possibility, I think, is the ability to train one layer at a time, which reduces GPU memory requirements.
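
To make the forward-only and layer-at-a-time points concrete, here is a minimal NumPy sketch in the spirit of Hinton's Forward-Forward idea. The goodness objective, threshold, learning rate and the toy training loop are illustrative assumptions rather than the exact recipe from the paper; the point is only that each layer updates itself from its own forward pass, so no backward pass and no cross-layer gradient storage are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """A ReLU layer trained with a purely local, forward-only rule."""

    def __init__(self, n_in, n_out, lr=0.03, theta=2.0):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.lr = lr        # local learning rate (illustrative value)
        self.theta = theta  # goodness threshold (illustrative value)

    def forward(self, x):
        # normalise the input so only its direction carries information
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(0.0, x @ self.W)

    def local_update(self, x, positive):
        """One forward pass plus a weight update that touches only this layer."""
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        h = np.maximum(0.0, x @ self.W)
        goodness = (h ** 2).sum(axis=1)          # per-sample "goodness"
        sign = 1.0 if positive else -1.0
        # push goodness above theta for positive data, below theta for negative
        pressure = 1.0 / (1.0 + np.exp(sign * (goodness - self.theta)))
        self.W += self.lr * 2.0 * x.T @ (h * (sign * pressure)[:, None])
        return h                                  # becomes the next layer's input


# Greedy, layer-by-layer training: only one layer's weights and activations
# are needed at any moment, which is where the memory saving comes from.
def train_layerwise(layer_sizes, pos_data, neg_data, epochs=10):
    layers, pos, neg = [], pos_data, neg_data
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        layer = FFLayer(n_in, n_out)
        for _ in range(epochs):
            layer.local_update(pos, positive=True)
            layer.local_update(neg, positive=False)
        # freeze the layer and pass its outputs on; no gradients cross this boundary
        pos, neg = layer.forward(pos), layer.forward(neg)
        layers.append(layer)
    return layers


if __name__ == "__main__":
    # toy "positive" vs "negative" data, just to show the loop runs
    pos = rng.normal(1.0, 1.0, size=(256, 20))
    neg = rng.normal(-1.0, 1.0, size=(256, 20))
    net = train_layerwise([20, 64, 64], pos, neg)
    print("trained", len(net), "layers without any backward pass")
```

Nothing in local_update requires a backward pass through earlier layers, so in principle an inference-oriented NPU could reuse its forward kernels for the update step as well.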

And, probably more important, it opens the door to all kinds of not-yet-seen network architectures, topologies and training methods that do not require fully differentiable pathways.

edit: regarding the brain-inspired part... you can dismiss it as AI's reverse cargo cult - the assumption that if it imitates some properties of the brain it should act like the brain - but I would be cautious about attributing that kind of thinking to Hinton. Brains are very different from ANNs, and trying to emulate their properties could provide insights into how they work.

3

blimpyway t1_ivjly2a wrote

This could indeed be one case. However, a couple hundred attempts is not the limit - a kid would get it in fewer than a couple dozen trials, or she would get bored.

However, I found that some models can do it even faster - fewer than 5 failures in 50% of trials, including only 2 failures in 5% of trials.

1

blimpyway t1_ivivcwr wrote

Tesla had collected 780M miles of driving data by 2016.

A human learning to drive 16 h/day at an average speed of 30 mph for 18 years would have a data set of ~3M miles.

That mileage is an absurdly generous upper bound - nobody actually drives anywhere near that much before learning - so we can say humans are at least 1000 times more sample efficient than whatever Tesla and the other autonomous driving companies are doing.
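
Rough numbers behind that comparison, as a back-of-the-envelope check (the human figure is deliberately an extreme upper bound):

```python
# Back-of-the-envelope check of the mileage comparison above.
tesla_miles = 780e6                      # fleet miles collected by 2016
human_upper_bound = 16 * 30 * 365 * 18   # 16 h/day * 30 mph * 18 years ~ 3.15M miles

print(f"human upper bound: {human_upper_bound / 1e6:.2f}M miles")
print(f"raw ratio:         {tesla_miles / human_upper_bound:.0f}x")
# ~250x even against this extreme upper bound; a real learner driver sees
# only a few thousand miles before driving competently, so the actual
# sample-efficiency gap is easily 1000x or more.
```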

−1