Submitted by fujidaiti t3_10pu9eh in MachineLearning
This may be a silly question for those familiar with the field, but do machine learning researchers see no further prospects for traditional methods (by "traditional" I mean anything other than deep learning)? I feel that most of the time when people talk about machine learning today, they are really referring to deep learning, but is the same true in the academic world? Have people who were studying traditional methods switched to neural networks? I know many researchers are excited about deep learning, but I wonder what they think about the other methods.
[ EDITED ]
I’m glad that I got far more responses than I expected! However, I would like to add here that my intention did not seem to come across to some people because of my inaccurate English.
I think "have given up" was poorly phrased. What I really meant to ask was: are ML researchers no longer interested in traditional ML? Have those who studied, say, SVMs moved on to the DL field? That was my point, and u/qalis gave me a good comment on it. Thanks to all the others as well.
qalis t1_j6mczg1 wrote
Absolutely not! There is still a lot of research going into traditional ML methods. For tabular data, traditional ML is typically vastly superior to deep learning. Boosting models in particular receive a lot of attention thanks to the very good implementations available (a minimal training sketch follows the list below). See, for example:
- SketchBoost, CuPy-based boosting from NeurIPS 2022, aimed at incredibly fast multioutput classification
- A Short Chronology Of Deep Learning For Tabular Data by Sebastian Raschka, a great literature overview of deep learning on tabular data; spoiler: it does not work, and XGBoost or similar models are just better
- in time series forecasting, LightGBM-based ensembles typically beat all deep learning methods while being much faster to train (a minimal lag-feature sketch also follows this list); see e.g. this paper, and you can also see it in Kaggle competitions and other papers. A friend of mine works in this area at NVidia, and their internal benchmarks (soon to be published) show that the top 8 models in a large-scale comparison are in fact various LightGBM ensemble variants, not deep learning models (which, in fact, kinda disappointed them, since it's, you know, NVidia)
- domains requiring high interpretability ignore deep learning altogether and put all their research into traditional ML (a small rule-extraction sketch follows this list); see e.g. counterfactual examples, important interpretability methods in finance, or rule-based learning, important in medical and legal applications
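
To make the tabular-data point concrete, here is a minimal sketch (my addition, not taken from any of the papers above) of a boosted-tree baseline using the XGBoost scikit-learn API; the dataset and hyperparameters are arbitrary placeholders:

```python
# Minimal sketch: gradient boosting on a small tabular dataset.
# Assumes xgboost and scikit-learn are installed; values are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A small boosted-tree model; no feature scaling or embeddings needed,
# which is part of why these models remain so practical on tabular data.
model = XGBClassifier(
    n_estimators=300,
    max_depth=4,
    learning_rate=0.1,
    eval_metric="logloss",
)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```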
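
And a rough sketch of how LightGBM gets used for forecasting: turn the series into a supervised problem with lag features, then fit a regressor. This is a toy setup of my own; the benchmarks and ensembles mentioned above use far heavier feature engineering:

```python
# Minimal sketch: LightGBM on a toy univariate series via lag features.
# Assumes lightgbm, pandas and numpy are installed; data is synthetic.
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
y = np.sin(np.arange(500) / 10) + rng.normal(0, 0.1, 500)

# Turn the series into a supervised learning problem with lagged values
df = pd.DataFrame({"y": y})
for lag in (1, 2, 3, 7, 14):
    df[f"lag_{lag}"] = df["y"].shift(lag)
df = df.dropna()

train, test = df.iloc[:-50], df.iloc[-50:]
features = [c for c in df.columns if c.startswith("lag_")]

model = LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(train[features], train["y"])
preds = model.predict(test[features])
print("MAE:", np.mean(np.abs(preds - test["y"].values)))
```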
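
For the interpretability point, here is an illustrative sketch only: a shallow decision tree whose learned rules can be printed and audited. It stands in for the dedicated rule-learning and counterfactual methods mentioned above, which are more sophisticated; the dataset and depth are arbitrary:

```python
# Minimal sketch of the kind of transparency traditional ML offers:
# a shallow tree whose full decision logic fits on one screen.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Human-readable if/else rules, the sort of artifact that regulated
# domains (finance, medicine, law) can actually review and sign off on.
print(export_text(clf, feature_names=list(data.feature_names)))
```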