hellrail t1_itdlgvb wrote
Reply to comment by Real_Revenue_4741 in [D] What things did you learn in ML theory that are, in practice, different? by 4bedoe
Ok, found the right one.
Well, generally I must say: good example. I accept it at least as a very interesting example to talk about, one worth mentioning in this context.
Nevertheless, it's still valid for all non-CNN/ResNet/Transformer models.
Taking into account that it's based on an old theory (developed before 1990, when such deep networks did not yet exist), one should expect its limitations: it doesn't try to model the effects taking place during the training of such complex deep models, which simply wasn't a topic back then.
So if I wanted to be really mean, I would say you can't expect a theory to make predictions about entities (in this case, modern deep networks) that hadn't been invented yet. One could say that the assumptions of VC theory include a "perfect" learning procedure (thereby excluding any dynamic effects of the learning procedure itself), which still holds for decision trees, random forests, SVMs, etc., all of which remain relevant for many problems.
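For concreteness, here's a quick sketch (my own illustrative numbers, not from the thread) of one standard form of the VC generalization bound. The point: it gives a real guarantee for a low-VC-dimension model like a linear classifier, and becomes trivially vacuous once the VC dimension dwarfs the sample size, as with modern deep nets:

```python
import math

def vc_gap(vc_dim, n, delta=0.05):
    """One classical VC bound on the generalization gap:
    gap <= sqrt((d * (ln(2n/d) + 1) + ln(4/delta)) / n), meaningful for n >> d.
    Clamped to 1.0, since a gap bound above 1 says nothing for 0-1 loss."""
    if 2 * n <= vc_dim:
        return 1.0  # bound form breaks down here and is vacuous anyway
    gap = math.sqrt((vc_dim * (math.log(2 * n / vc_dim) + 1)
                     + math.log(4 / delta)) / n)
    return min(gap, 1.0)

# Linear classifier in 100 dimensions (VC dim = d + 1 = 101):
print(vc_gap(101, 100_000))        # ~0.09 -- a real guarantee
# A deep net whose VC dimension runs into the millions:
print(vc_gap(5_000_000, 100_000))  # 1.0  -- completely vacuous
```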
But since I'm not that mean, I admit that these observations on modern networks do undermine the practical usefulness of the VC-dimension view for deep networks of the mentioned types, and that this must have been a moderate surprise before anyone had actually tested whether VC-dimension arguments hold for CNNs/ResNets/Transformers. So, good example.
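For anyone reading along, the kind of observation at issue can be reproduced in a few lines: an overparameterized net driven to roughly 100% training accuracy on *random* labels (in the spirit of Zhang et al. 2017, "Understanding deep learning requires rethinking generalization"), which makes any uniform capacity bound of the above form vacuous. A rough sketch; the architecture and sizes are my own illustrative choices:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))     # random inputs
y = rng.integers(0, 2, size=200)   # random labels: nothing real to learn

# Heavily overparameterized relative to 200 samples.
net = MLPClassifier(hidden_layer_sizes=(512, 512), max_iter=5000,
                    random_state=0)
net.fit(X, y)

# Pure memorization, yet training accuracy approaches 1.0.
print("train accuracy on random labels:", net.score(X, y))
```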