Submitted by Dartagnjan t3_10ee9kp in MachineLearning
SetentaeBolg t1_j4qimm0 wrote
There are mathematical proofs of convergence for a single perceptron learning a linearly separable classification (the perceptron convergence theorem), but for more realistic modern neural nets I don't believe there are any proofs guaranteeing general convergence, because I don't think convergence is actually guaranteed: for the reason pointed out, you can't be certain gradient descent will find the "right" minima.
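A minimal sketch of the classical result being referenced: the perceptron learning rule on a toy linearly separable dataset (the dataset, seed, and separating hyperplane here are illustrative assumptions, not from the thread). The perceptron convergence theorem guarantees the loop below makes only finitely many updates when such a separating hyperplane exists; there is no analogous guarantee for gradient descent on a deep, non-convex loss.

```python
import numpy as np

# Toy linearly separable data: labels in {-1, +1} defined by a known hyperplane.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -1.0])
y = np.sign(X @ true_w + 0.5)

# Classic perceptron learning rule: update on each misclassified point.
# Because the data are linearly separable by construction, the convergence
# theorem guarantees this loop terminates after finitely many updates.
w = np.zeros(2)
b = 0.0
converged = False
while not converged:
    converged = True
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
            w += yi * xi             # perceptron update
            b += yi
            converged = False

print("learned weights:", w, "bias:", b)
```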
Dartagnjan OP t1_j4qs8zx wrote
Thanks for confirming my suspicions. Do you happen to have a reference for the case where the choice of optimization method influences training in a way that inhibits convergence to a better set of minima?