Submitted by Lost-Parfait568 t3_xtxe6f in MachineLearning
seba07 t1_iqshjec wrote
And the "results are 0.x% better" papers are often about challenges that haven't been interesting for many years.
Hamoodzstyle t1_iqsmn0k wrote
Also, don't forget: no ablation study, so it's impossible to know which of the tiny changes actually helped.
jturp-sc t1_iqvcptz wrote
Most of them are really just CV padding for some 1st- or 2nd-year grad student. If you look into them more, it's usually something as trivial as being the first to publish a paper applying a model that came out 12 months ago to a less common dataset.
It's really more about the grad student's advisor doing them a solid by building their CV than actually adding useful literature to the world.
sk_2013 t1_iqwen2y wrote
Honestly I wish my advisor had done that.
My CS program was alright overall, but the ML professor used the same undergrad material for all his classes, and I've been left trying to put together functional knowledge and a career on my own.