Submitted by Tea_Pearce t3_10aq9id in MachineLearning
bloc97 t1_j49ft0g wrote
Reply to comment by mugbrushteeth in [D] Bitter lesson 2.0? by Tea_Pearce
My bet is on "mortal computers" (a term coined by Hinton). Our current methods for training deep nets are extremely inefficient: CPUs and GPUs have to load data from memory, process it, then write the results back. We could eliminate this bandwidth bottleneck by fabricating, in essence, a very large differentiable memory cell whose hardware connections physically represent the connections between neurons, which would let us do inference or backprop in a single step.
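A rough sketch of the idea (my own illustration, not anything Hinton has published): in an analog crossbar, the weights are stored as physical conductances, so a matrix-vector product happens in one settling step via Ohm's and Kirchhoff's laws, with no weight traffic between memory and compute. Simulated numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Conventional digital flow: weights live in DRAM and must be fetched
# on every step -- this round trip is the bandwidth bottleneck.
W = rng.normal(size=(4, 3))   # weight matrix (fetched from memory)
x = rng.normal(size=3)        # input activations
y_digital = W @ x             # compute happens only after the fetch

# "Mortal computer" flow: the same weights are baked into the device as
# conductances; applying input voltages yields output currents directly,
# so the multiply is a single physical step with no weight movement.
conductance = W               # hardware encodes W in place (hypothetical)
voltages = x
currents = conductance @ voltages  # analog in-memory matrix-vector product

# Both routes compute the same linear map; only the data movement differs.
assert np.allclose(y_digital, currents)
```

The point of the sketch is that both paths compute the same math; the hardware change removes the load/compute/store cycle, not the arithmetic.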