Submitted by AutoModerator t3_11pgj86 in MachineLearning
andrew21w t1_jdcb0vo wrote
Why does nobody use polynomials as activation functions?
My naive impression is that polynomials would be ideal, since they can approximate nearly any kind of function you like. So they seem perfect...
But why aren't they used?
dwarfarchist9001 t1_jdd33ha wrote
Short answer: Polynomials can have very large derivatives compared to sigmoid or rectified linear functions, which leads to exploding gradients.
https://en.wikipedia.org/wiki/Vanishing_gradient_problem#Recurrent_network_model
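To make that concrete, here is a rough sketch (PyTorch, with arbitrary evaluation points) comparing the local derivative of a cubic activation with that of tanh. Backpropagation multiplies these local derivatives across layers, so unbounded factors like 3z^2 are what drive gradients to explode:

```python
# Rough sketch: the local derivative of a cubic activation grows without bound,
# while tanh's derivative never exceeds 1. Backprop multiplies these local
# derivatives across layers, so unbounded factors compound into exploding
# gradients. The evaluation points below are arbitrary.
import torch

z = torch.linspace(-10.0, 10.0, 5, requires_grad=True)
(z ** 3).sum().backward()
print("d/dz z^3 :", z.grad)      # 3*z^2 -> up to 300 at |z| = 10

z = torch.linspace(-10.0, 10.0, 5, requires_grad=True)
torch.tanh(z).sum().backward()
print("d/dz tanh:", z.grad)      # 1 - tanh(z)^2 -> at most 1, near 0 for large |z|
```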
underPanther t1_jddpryu wrote
Another reason: wide single-layer MLPs with polynomials cannot be universal. But lots of other activations do give universality with a single hidden layer.
The technical reason behind this is that discriminatory activations can give universality with a single hidden layer (Cybenko 1989 is the reference).
But polynomials are not discriminatory (https://math.stackexchange.com/questions/3216437/non-trivial-examples-of-non-discriminatory-functions), so they fail to reach this criterion.
Also, if you craft a multilayer perceptron with polynomial activations, does this offer any benefit over fitting a Taylor series directly?
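For intuition on that last question: stacking affine layers with polynomial activations just produces another polynomial of the input, with degree fixed by the architecture, so in 1D you end up in the same hypothesis class you could fit directly. A toy illustration with numpy (weights and biases chosen arbitrarily):

```python
# Toy illustration: an "MLP" made of affine maps and a squaring activation
# collapses to an ordinary polynomial in the input, so its expressive power
# is capped by that polynomial's degree. Weights/biases here are arbitrary.
import numpy as np
from numpy.polynomial import Polynomial

x = Polynomial([0.0, 1.0])            # the identity polynomial "x"
h1 = (1.3 * x + 0.2) ** 2             # layer 1: affine map, then square
h2 = (-0.7 * h1 + 0.5) ** 2           # layer 2: affine map, then square
print(h2)                             # a plain degree-4 polynomial in x
print(h2.degree())                    # 4 -- fixed by depth, not by training
```

The same collapse is one way to see the non-universality above: a single hidden layer of fixed-degree polynomial units, however wide, stays inside the finite-dimensional space of polynomials of that degree, which is not dense in the continuous functions on a compact set.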
andrew21w t1_jde4ayx wrote
The thread you sent me says that polynomials are non-discriminatory.
Are there other kinds of functions that are non-discriminatory?
underPanther t1_jdeofve wrote
Sorry for the confusion! It's discriminatory activations that lead to universality in wide single-layer networks. I've edited the post to reflect this.
As an aside, you might also find the following interesting; it's also extremely well-cited: https://www.sciencedirect.com/science/article/abs/pii/S0893608005801315
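For a hands-on version of that universality result, here is a rough sketch (scikit-learn, with arbitrary width, target function, and training settings) of a single hidden layer with a sigmoidal, i.e. discriminatory, activation fitting a wiggly 1D function; a fixed-degree polynomial activation in the same one-hidden-layer architecture could not drive the error arbitrarily low, per the collapse argument above:

```python
# Rough sketch: a single hidden layer with a sigmoidal (discriminatory)
# activation approximating a 1D target. Width, target, and training settings
# are arbitrary choices for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(-1, 1, 1000).reshape(-1, 1)
y = np.sin(3 * np.pi * x).ravel()          # a reasonably wiggly target

net = MLPRegressor(hidden_layer_sizes=(500,), activation="logistic",
                   solver="lbfgs", max_iter=10000, random_state=0)
net.fit(x, y)
print("mean absolute error:", np.mean(np.abs(net.predict(x) - y)))
```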