Submitted by GraciousReformer t3_118pof6 in MachineLearning
BoiElroy t1_j9ipbtg wrote
This is not the answer to your question, but one intuition I like about the universal approximation theorem that I thought I'd share is the comparison to a digital image. You use a finite set of pixels, each of which can take on a certain set of discrete values. With a 10 x 10 grid of pixels you can draw a crude approximation of a stick figure. With 1000 x 1000 you can capture a blurry but recognizable selfie. With enough pixels and the discrete values they can take, you can essentially capture anything you can dream of: every frame of every movie ever made. Obviously there are other issues later, like whether your model's operational design domain matches the distribution of the training data, or whether you just wasted a lot of GPU hours lol
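To make the analogy concrete, here is a minimal numpy sketch (my own illustration, not from the thread): a one-hidden-layer ReLU network plays the role of the pixel grid. Each hidden unit is explicitly constructed to bend the output at one "knot" point, so the network reproduces the piecewise-linear interpolant of a target function, and adding more units (more "pixels") shrinks the approximation error.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fit_relu_net(f, knots):
    """Build a one-hidden-layer ReLU net that linearly interpolates f at the knots.

    The net computes  bias + sum_i coefs[i] * relu(x - knots[i]),
    where coefs[i] is the change in slope at knot i.
    """
    y = f(knots)
    slopes = np.diff(y) / np.diff(knots)
    # first unit sets the initial slope; each later unit adds the slope change
    coefs = np.concatenate(([slopes[0]], np.diff(slopes)))
    bias = y[0]

    def net(x):
        # hidden layer: one ReLU unit per knot (except the last)
        h = relu(x[:, None] - knots[:-1][None, :])
        return bias + h @ coefs

    return net

if __name__ == "__main__":
    x = np.linspace(0.0, 2.0 * np.pi, 5000)
    for n in (10, 100, 1000):
        knots = np.linspace(0.0, 2.0 * np.pi, n)
        net = fit_relu_net(np.sin, knots)
        err = np.max(np.abs(net(x) - np.sin(x)))
        print(f"{n:5d} hidden units -> max error {err:.2e}")
```

Running it shows the max error dropping as the number of hidden units grows, which is the "more pixels, sharper image" intuition in network form. This hand-built construction is a sketch of why width helps, not how networks are actually trained.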
GraciousReformer OP t1_j9jh7zm wrote
Yes, a sufficiently fine grid will approximate any digital image. But that is an approximation of an image on a grid. How does it lead to approximation by a NN?