Submitted by theanswerisnt42 t3_10wtumf in MachineLearning
EyeSprout t1_j7sqjzc wrote
CNNs, and some very early tricks for them that used to be useful but are no longer really needed now that our computers are faster (like Gabor functions), were inspired by neuroscience research. Attention mechanisms were also floating around in neuroscience for quite a while, in models of memory and retrieval, before they were streamlined and simplified into the form we see today.
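For what a Gabor function actually looks like as an image feature: it's just an oriented sinusoid under a Gaussian envelope, which early vision work hand-designed as first-layer filters. A minimal NumPy sketch, with parameter names and default values that are purely illustrative (not taken from any particular paper):

```python
import numpy as np

def gabor_kernel(size=11, theta=0.0, lam=4.0, sigma=2.0, gamma=0.5, psi=0.0):
    """Sample a 2D Gabor function on a size x size grid.
    theta = orientation, lam = wavelength, sigma = envelope width,
    gamma = aspect ratio, psi = phase (illustrative defaults)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lam + psi)
    return envelope * carrier

# A small bank of oriented filters, like the hand-designed first-layer
# features that learned conv filters later made unnecessary.
bank = np.stack([gabor_kernel(theta=t)
                 for t in np.linspace(0, np.pi, 4, endpoint=False)])
print(bank.shape)  # (4, 11, 11)
```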
In general, when ideas go from neuroscience to machine learning, it takes a lot of stripping down to the actually relevant and useful components before they become workable. Neuroscientists have a lot of ideas for mechanisms, but not all of them turn out to be useful...
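As an example of how far a mechanism gets stripped down, the "form we see today" for attention is basically a few lines of matrix algebra (scaled dot-product attention). A minimal sketch, with shapes and names that are my own for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention.
    Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v)."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # query-key similarity
    weights = softmax(scores, axis=-1)        # normalized retrieval weights
    return weights @ V                        # weighted readout of values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
print(attention(Q, K, V).shape)  # (2, 8)
```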