master3243 t1_j67jwad wrote
Hinton says that it does not generalize as well on the toy problems he investigated. Poor performance on toy problems is often a bad sign. I predict that, unless someone discovers a breakthrough, it will remain worse than backprop despite running faster (since it avoids the bottlenecks you mentioned).
currentscurrents OP t1_j67lie8 wrote
I'm messing around with it to try to scale it to a non-toy problem, and maybe adapt it to one of the major architectures like CNNs or transformers. I'm not sitting on a ton of compute though; it's just me and my RTX 3060.
A variant paper, Predictive Forward-Forward, claims performance comparable to backprop. The authors run the model in a generative mode to create the negative data.
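For anyone unfamiliar with the forward-forward setup being discussed: each layer is trained locally to give high "goodness" (sum of squared activations) on positive data and low goodness on negative data, with no backprop between layers. Below is a minimal numpy sketch of that layer-local objective for a single layer. The data here is a hypothetical Gaussian stand-in for real positive/negative samples (the papers construct negatives differently, e.g. generatively), and all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-in data: positives cluster near +1, negatives near -1.
d, n = 8, 256
pos = rng.normal(loc=1.0, scale=0.5, size=(n, d))
neg = rng.normal(loc=-1.0, scale=0.5, size=(n, d))

W = rng.normal(scale=0.1, size=(d, d))
b = np.zeros(d)
theta = float(d)  # goodness threshold (illustrative choice)
lr = 0.1

for step in range(300):
    for x, y in ((pos, 1.0), (neg, 0.0)):
        z = x @ W + b
        h = relu(z)
        g = (h ** 2).sum(axis=1)   # per-sample "goodness"
        p = sigmoid(g - theta)     # P(sample is positive)
        # Logistic loss gradient w.r.t. goodness, pushed back through
        # the squared activations and the ReLU -- all local to this layer.
        dg = (p - y)[:, None]
        dz = dg * 2.0 * h * (z > 0)
        W -= lr * x.T @ dz / n
        b -= lr * dz.mean(axis=0)

def goodness(x):
    return (relu(x @ W + b) ** 2).sum(axis=1)

# After training, positive samples should score much higher goodness
# than negatives; no gradients ever flowed between layers.
print(goodness(pos).mean(), goodness(neg).mean())
```

In a deep forward-forward network, each layer repeats this objective on the (normalized) outputs of the layer below, which is what removes the backward pass and its memory bottleneck.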
master3243 t1_j67mcbh wrote
> A variant paper, Predictive Forward-Forward
Interesting, I'll have to read it at a more convenient time.
Do share your results if they are promising/fruitful.
Grenouillet t1_j67wexx wrote
That's very interesting. Is there a way to follow your progress?
ch9ki7 t1_j688syx wrote
Would be interested as well. Do you have your progress on GitHub or somewhere similar?