Submitted by GraciousReformer t3_118pof6 in MachineLearning
hpstring t1_j9jb96f wrote
Universal approximation is not enough; you need efficiency to make things work.
DL is the only class of algorithms that beats the curse of dimensionality when learning a certain (very general) class of high-dimensional functions (something related to Barron spaces). Correct me if this is not accurate.
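For concreteness, here's a rough sketch of the Barron (1993) bound I'm alluding to (constants and domain radius suppressed, so treat this as a sketch rather than the exact statement):

```latex
% Barron (1993): if f has finite Barron norm C_f, then some
% two-layer network f_n with n sigmoidal units satisfies a
% dimension-free L^2 approximation rate:
\| f - f_n \|_{L^2(\mu)} \;\lesssim\; \frac{C_f}{\sqrt{n}}
% whereas approximation from any fixed n-dimensional linear basis,
% for functions with s derivatives, degrades with the dimension d:
\text{error} \;\sim\; n^{-s/d}
```

The point is that the 1/√n rate has no d in it, while the classical rate collapses as d grows.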
inspired2apathy t1_j9jsbz6 wrote
Is that entirely accurate though? There are all kinds of explicit dimensionality reduction methods, and they can be combined with traditional ML models pretty easily for supervised learning. As I understand it, the unique thing DL gives us is a massive embedding that can encode/"represent" something like language or vision.
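The kind of pipeline I mean is trivial to set up. A scikit-learn sketch (the digits dataset and the hyperparameters are just illustrative stand-ins, nothing ImageNet-scale):

```python
# Explicit dimensionality reduction + traditional ML:
# PCA down to a few components, then a linear classifier.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)  # 64-dim pixel features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    PCA(n_components=16),              # explicit reduction: 64 -> 16 dims
    LogisticRegression(max_iter=1000), # "traditional" supervised model
)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))       # works fine on small problems
```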
hpstring t1_j9jxzpm wrote
Well, traditional ML + dimensionality reduction cannot crack e.g. ImageNet recognition.
inspired2apathy t1_j9jzpqw wrote
Other models like PGMs can absolutely be applied to ImageNet, just not for SOTA accuracy.
MuonManLaserJab t1_j9k3bcn wrote
They did say "crack", not "attempt".
GraciousReformer OP t1_j9ji7t1 wrote
But why does DL beat the curse? And why is DL the only class?
hpstring t1_j9juk1f wrote
Q1: We don't know yet. Q2: There are probably other classes, but they haven't been discovered yet or are only at an early stage of research.
NitroXSC t1_j9k09wt wrote
> Q2: There are probably other classes, but they haven't been discovered yet or are only at an early stage of research.
I think there are many different classes that would work, but current DL is based in large part on matrix-vector operations, which can be implemented efficiently on current hardware.
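As a minimal illustration of what I mean by matrix-vector operations (NumPy sketch; the layer sizes and batch size are arbitrary choices):

```python
# A forward pass through a small MLP is essentially two matrix
# multiplications, which GPUs/TPUs (and BLAS on CPUs) execute
# very efficiently.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 128))    # batch of 32 inputs, 128 features
W1 = rng.standard_normal((128, 256))  # first layer weights
W2 = rng.standard_normal((256, 10))   # second layer weights

h = np.maximum(x @ W1, 0.0)           # matmul + ReLU nonlinearity
y = h @ W2                            # matmul -> logits
print(y.shape)                        # (32, 10)
```

Almost all of the runtime lives in those two `@` operations, which is exactly what accelerator hardware is built to speed up.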