
currentscurrents t1_jajpjj7 wrote

It's not dead, but gradient-based optimization is more popular right now because it works so well for neural networks.

But you can't always use gradient descent. Backprop requires access to the inner workings of the function and requires it to be smoothly differentiable. Even when you can use it, it may not find a good solution if the loss landscape has a lot of bad local minima.

Evolution is widely used in combinatorial optimization problems, where you're trying to find the best ordering of a fixed set of elements (the travelling salesman problem is the classic case); a minimal sketch is below.
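
As a rough illustration, here is a minimal evolutionary-search sketch for an ordering problem, a tiny random travelling-salesman instance. The city coordinates, population size, mutation scheme, and generation count are all arbitrary choices for the example, not a tuned algorithm.

```python
# Minimal sketch: evolutionary search over permutations (a tiny TSP instance).
# All hyperparameters here (10 cities, population 50, 200 generations) are arbitrary.
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(10)]

def tour_length(order):
    """Total length of the closed tour that visits cities in the given order."""
    return sum(
        ((cities[a][0] - cities[b][0]) ** 2 + (cities[a][1] - cities[b][1]) ** 2) ** 0.5
        for a, b in zip(order, order[1:] + order[:1])
    )

def mutate(order):
    """Swap two random positions -- a simple permutation-preserving mutation."""
    i, j = random.sample(range(len(order)), 2)
    child = order[:]
    child[i], child[j] = child[j], child[i]
    return child

# Evolve: keep the best half of the population, refill it with mutated copies.
population = [random.sample(range(len(cities)), len(cities)) for _ in range(50)]
for generation in range(200):
    population.sort(key=tour_length)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

population.sort(key=tour_length)
print(tour_length(population[0]), population[0])
```

Note that nothing here needs a gradient or even continuity: the objective only has to be evaluable on candidate solutions, which is exactly why this family of methods fits discrete ordering problems.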

69

Hostilis_ t1_jak681p wrote

>But you can't always use gradient descent. Backprop requires access to the inner workings of the function

Backprop and gradient descent are not the same thing. When you don't have access to the inner workings of the function, you can still use stochastic approximation methods for getting gradient estimates, e.g. SPSA. In fact, there are close ties between genetic algorithms and stochastic gradient estimation.

33