gratus907 t1_iqxcyg9 wrote

From the edited question, I can see you understand that it is possible but unnecessary, and that you are still wondering whether there are real drawbacks or just inertia.

One thing to consider is that modern deep learning relies on very efficient parallel hardware such as GPUs, which is built to carry out simple instructions in a massively parallel manner. A widely known metaphor is that a CPU is a few highly educated people, while a GPU is a thousand ten-year-olds. If that is the best hardware we have, we may as well use what it does well: performing simple instructions (matrix multiplication, etc.).
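For instance, here is a minimal sketch (assuming PyTorch; the sizes are arbitrary) of the kind of simple, massively parallel operation GPUs are built for:

```python
import torch

# A big matrix multiplication: millions of identical
# multiply-accumulate operations, exactly the kind of
# simple instruction a GPU executes in parallel.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

if torch.cuda.is_available():
    a, b = a.cuda(), b.cuda()  # move the operands to the GPU

c = a @ b  # one kernel launch, massively parallel under the hood
```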

If polynomial neurons and the like offered added benefits, such as extending theoretical results, they might have been considered. However, we already have the Universal Approximation Theorem and a rich body of results around it, which makes such an effort less exciting.
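For reference, a common informal statement of the theorem (a paraphrase of the classical Cybenko/Hornik-style result, not any single paper's exact wording): a one-hidden-layer network with a suitable activation $\sigma$ can approximate any continuous $f$ on a compact set $K$:

```latex
\[
\forall \varepsilon > 0 \;\; \exists N, \{c_i, w_i, b_i\} :\quad
\sup_{x \in K} \Big|\, f(x) - \sum_{i=1}^{N} c_i \, \sigma(w_i^\top x + b_i) \Big| < \varepsilon
\]
```

Notably, Leshno et al. (1993) showed this density holds exactly when $\sigma$ is *not* a polynomial, which is one theoretical reason polynomial neurons are less attractive as drop-in activations.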

So yes, I think if you can somehow design a next-generation GPU that evaluates cubic polynomials extremely fast, then by all means, ML will adapt to using cubics as building blocks.
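As a toy illustration (a hypothetical sketch, not an established layer type), "cubics as building blocks" is already easy to express in today's frameworks; the question is whether hardware would make it competitive:

```python
import torch
import torch.nn as nn

class CubicLayer(nn.Module):
    """Hypothetical building block: a linear map followed by an
    elementwise cube instead of the usual ReLU/tanh."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)

    def forward(self, x):
        z = self.linear(x)
        return z ** 3  # cubic nonlinearity

layer = CubicLayer(16, 8)
y = layer(torch.randn(4, 16))  # runs fine; the matmul inside still dominates the cost
```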
