Submitted by Qwillbehr t3_11xpohv in MachineLearning
adventuringraw t1_jddte0k wrote
No one else mentioned this, so I figured I'd add that there's also much more exotic research going into low-power techniques that could match what we're seeing with modern LLMs. One of the most interesting areas to me personally, is that there's been recent progress in spiking neural networks, an approach much more inspired by biological intelligence. The idea, instead of continuous parameters sending vectors between layers, you've got spiking neurons sending sparse digital signals. Progress historically has been kind of stalled out since they're so hard to train, but there's been some big movement just this month actually, with spikeGPT. They basically figured out how to leverage normal deep learning training. That along with a few other tricks got something with comparable performance to an equivalently sized DNN, with 22x reduced power consumption.
The real promise of SNNs, though, is that in theory you could develop large-scale specialized 'neuromorphic' hardware built to run them optimally... what GPUs and TPUs are for traditional DNNs. A chip like that could end up being a cornerstone of efficient ML if things end up working out that way, and who knows? Maybe it'd even open the door to tighter coupling and progress between ML and neuroscience.
There are plenty of other things being researched too, of course. I'm nowhere near knowledgeable enough to give a proper overview, but it's a pretty vast space once you start looking at the more exotic research efforts. I'm sure carbon nanotube or superconductor based computing breakthroughs would massively change the equation, for example. 20 years from now, we might find ourselves in a completely new paradigm... that'd be pretty cool.