
hpstring t1_j9jb96f wrote

Universal approximation is not enough; you need efficiency to make things work.

DL is the only class of algorithms that beats the curse of dimensionality when learning a certain (very general) class of high-dimensional functions (something related to Barron spaces). Correct me if this is not accurate.
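For reference, the flavor of result I have in mind is Barron's 1993 approximation theorem. Stated informally and from memory (constants and exact conditions vary across write-ups, so treat this as a sketch, not a precise citation): for functions on a ball of radius r with a finite "Barron norm", a one-hidden-layer network with n units achieves a rate that does not depend on the input dimension d, while fixed linear n-term approximation does degrade with d.

```latex
% Barron (1993), informal sketch; r is the radius of the domain ball,
% \mu a probability measure on it. Constants vary across statements.
\[
  C_f = \int_{\mathbb{R}^d} \lVert\omega\rVert \,\lvert\hat f(\omega)\rvert \, d\omega < \infty
  \;\Longrightarrow\;
  \exists\, f_n \text{ (one hidden layer, $n$ units)}:\quad
  \lVert f - f_n \rVert_{L^2(\mu)}^2 \;\le\; \frac{(2 r C_f)^2}{n}.
\]
% The 1/n rate is dimension-free; fixed linear n-term schemes over such
% classes generally face squared-error rates on the order of n^{-2/d}.
```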

54

inspired2apathy t1_j9jsbz6 wrote

Is that entirely accurate though? There are all kinds of explicit dimensionality reduction methods. They can be combined with traditional ML models pretty easily for supervised learning, as in the sketch below. As I understand it, the unique thing DL gives us is just a massive embedding that can encode/"represent" something like language or vision.
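For concreteness, a minimal sketch of the kind of pipeline I mean; the dataset, component count, and classifier are just illustrative choices, nothing canonical:

```python
# Sketch: explicit dimensionality reduction (PCA) feeding a traditional
# ML classifier. All hyperparameters here are arbitrary illustrations.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)          # 64-dim pixel features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Reduce 64 dims to 16 with PCA, then fit a linear classifier on top.
clf = make_pipeline(PCA(n_components=16), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

This works fine at small scale; the question in this thread is why it stops being competitive on something like ImageNet.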

6

hpstring t1_j9jxzpm wrote

Well, traditional ML + dimensionality reduction can't crack, e.g., ImageNet recognition.

9

inspired2apathy t1_j9jzpqw wrote

Other models, like PGMs (probabilistic graphical models), can absolutely be applied to ImageNet, just not at SOTA accuracy.

−3

GraciousReformer OP t1_j9ji7t1 wrote

But why does DL beat the curse? And why is DL the only class?

4

hpstring t1_j9juk1f wrote

Q1: We don't know yet. Q2: There are probably other classes, but they haven't been discovered yet or are only at an early stage of research.

13

NitroXSC t1_j9k09wt wrote

> Q2: There are probably other classes, but they haven't been discovered yet or are only at an early stage of research.

I think there are many different classes that would work, but current DL is based in large part on matrix-vector operations, which can be implemented efficiently on current hardware.
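To make the hardware point concrete, a toy sketch (all shapes and values are arbitrary illustrations): one dense layer is essentially a single matrix-vector product plus a pointwise nonlinearity, which is exactly the primitive that BLAS libraries and GPUs execute efficiently.

```python
# One dense layer = matvec + pointwise nonlinearity (shapes illustrative).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))   # weight matrix
b = rng.standard_normal(512)          # bias vector
x = rng.standard_normal(256)          # input activations

h = np.maximum(W @ x + b, 0.0)        # matrix-vector product, then ReLU
print(h.shape)                        # (512,)
```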

10