Submitted by RobbinDeBank t3_z2hr3p in MachineLearning
alwayslttp t1_ixj9zpy wrote
Reply to comment by DigThatData in [D] Schmidhuber: LeCun's "5 best ideas 2012-22" are mostly from my lab, and older by RobbinDeBank
All metrics are stacked massively in favour of first-level citations - many entirely ignore the second level and beyond. For example, a paper's "cited by" count is its most prominent metric of influence/importance, and it only counts the papers that directly cite it.
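To make the distinction concrete, here's a minimal sketch of direct ("cited by") versus second-level citation counts over a toy citation graph. All paper names and the graph itself are made up for illustration:

```python
# Toy citation graph: each key cites the papers in its list.
# Paper names are hypothetical, purely for illustration.
cites = {
    "GRU_paper": ["LSTM_paper"],
    "new_model_A": ["GRU_paper"],
    "new_model_B": ["GRU_paper"],
}

def direct_citations(paper, cites):
    """The usual 'cited by' count: papers citing `paper` directly."""
    return sum(paper in refs for refs in cites.values())

def second_level_citations(paper, cites):
    """Papers that cite something that cites `paper` -
    the influence most metrics never count."""
    direct = {p for p, refs in cites.items() if paper in refs}
    return sum(any(r in direct for r in refs) for refs in cites.values())

print(direct_citations("LSTM_paper", cites))        # 1 (only the GRU paper)
print(second_level_citations("LSTM_paper", cites))  # 2 (A and B, via GRU)
```

So if everyone cites the GRU paper and the GRU paper cites LSTM, the LSTM paper's "cited by" count stays at 1 no matter how big the downstream literature gets.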
I don't know this particular beef, but it sounds like citing GRU and not LSTM is a potential slight/insult here. Exactly the kind of thing you see in petty academic rivalries. You're explicitly deciding who you're crediting with the key innovations you're building on, and you know that most people aren't chasing every sub-reference of every citation.
DigThatData t1_ixkghj5 wrote
sounds like the problem here is the metrics then. which I'm also pretty sure only became a thing fairly recently. For a long time, the only citation-based metric anyone talked about was their Erdős number, which was a tongue-in-cheek thing anyway. Concern over metrics like this is more likely than not going to damage research progress by encouraging people to game them. The only "cited by" count I ever concern myself with is for sorting stuff on Google Scholar, and I never presume it's an exact count or maps directly to the ranking I actually need.
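For the curious: the Erdős number is just shortest-path distance from Erdős in the coauthorship graph, so it falls out of a plain BFS. A minimal sketch, with a made-up toy graph (the names are hypothetical):

```python
from collections import deque

# Toy undirected coauthorship graph; all names except Erdos are invented.
coauthors = {
    "Erdos": {"Alice"},
    "Alice": {"Erdos", "Bob"},
    "Bob": {"Alice"},
}

def erdos_number(author, coauthors, root="Erdos"):
    """BFS from the root; distance = Erdős number, None if unconnected."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        cur = queue.popleft()
        for nxt in coauthors.get(cur, ()):
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return dist.get(author)

print(erdos_number("Bob", coauthors))  # 2 (Bob -> Alice -> Erdos)
```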