Submitted by MichelMED10 t3_y9yuza in MachineLearning
m98789 t1_it8rax1 wrote
This was a popular approach early on: use a DNN essentially as a feature extractor, then feed those features to a sophisticated classifier such as an SVM, i.e., split the process into two distinct steps.
Generally speaking, this approach fell out of favor when it became evident that “end to end” learning performed better. That is, you learn not just the feature extractor but also the classifier, together.
As the E2E approach took favor, folks did try more sophisticated designs for the last layers to mimic various kinds of classical classifiers. Ultimately, it was found that a simple final layer yielded results that were just as good.
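A minimal sketch of that two-step recipe, assuming a pretrained ResNet-18 backbone from torchvision and placeholder `train_images` / `train_labels` tensors (the specific backbone, data, and SVM settings are illustrative, not from the original discussion):

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Pretrained backbone with the classification head removed (pure feature extractor).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()  # outputs 512-d features instead of class logits
backbone.eval()

@torch.no_grad()
def extract_features(images):
    # images: (N, 3, 224, 224) tensor, already normalized for the backbone
    return backbone(images).cpu().numpy()

# Step 1: extract features for the training set (train_images / train_labels are placeholders).
X_train = extract_features(train_images)

# Step 2: fit a classical classifier (here an SVM) on those frozen features.
clf = SVC(kernel="rbf")
clf.fit(X_train, train_labels)
```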
MichelMED10 OP t1_it9d28r wrote
Yes, but the idea is that we still train the model end to end. Then, if we feed the extracted features to XGBoost, in the worst case XGBoost will perform as well as the original classifier head. And we will still have trained our model end to end prior to freezing the encoder. So theoretically it seems better.
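Roughly something like this, as a sketch (the `model.encoder` / head split, `train_loader`, and the XGBoost hyperparameters are placeholders, not a definitive recipe):

```python
import torch
import xgboost as xgb

# 1. Train the full model (encoder + classifier head) end to end as usual (one epoch shown).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()
for inputs, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()

# 2. Freeze the encoder and extract features for the whole training set.
model.encoder.eval()
feats, targets = [], []
with torch.no_grad():
    for inputs, labels in train_loader:
        feats.append(model.encoder(inputs).cpu())
        targets.append(labels)
X = torch.cat(feats).numpy()
y = torch.cat(targets).numpy()

# 3. Fit XGBoost on the frozen features and use it as the final classifier.
booster = xgb.XGBClassifier(n_estimators=200)
booster.fit(X, y)
```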
SlowFourierT198 t1_itbv7y2 wrote
XGBoost is not strictly better than NNs for classification. I can guarantee you that, as I worked on a classification dataset where a NN performed significantly better than XGBoost. While your statement will probably be correct for small-sample datasets, I am pretty sure it will not hold for large datasets with complicated features.