crouching_dragon_420 t1_ixilxbs wrote

That's total horseshit when the architecture in the paper is almost the same as the original LSTM. I'm not talking about modern papers. If they cite GRU, they should cite LSTM as well. I don't agree with the argument that GRU cites LSTM, so it's fine to cite GRU but not LSTM. That shouldn't be how credit assignment works.

5

DigThatData t1_ixinfbc wrote

> If they cite GRU, they should cite LSTM as well.

that's not how citations work...

> GRU cite LSTM so it's fine to cite GRU but not LSTM.

but that's literally how citations work. If you cite paper X, you are implicitly citing everything that paper X cited as well. citation graphs are transitive.
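To make the transitivity claim concrete, here's a toy sketch with a made-up citation graph (the paper names and `CITES` mapping are hypothetical, just for illustration): following citation edges transitively from a new paper that cites only GRU still reaches LSTM.

```python
# Hypothetical toy citation graph: paper -> papers it directly cites.
CITES = {
    "NewPaper": ["GRU"],
    "GRU": ["LSTM"],
    "LSTM": [],
}

def reachable_citations(paper, graph):
    """All papers reachable by following citation edges transitively (DFS)."""
    seen = set()
    stack = [paper]
    while stack:
        for cited in graph[stack.pop()]:
            if cited not in seen:
                seen.add(cited)
                stack.append(cited)
    return seen

print(reachable_citations("NewPaper", CITES))  # contains both 'GRU' and 'LSTM'
```

So in the transitive sense, citing GRU does "reach" LSTM, which is the point being made here.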

1

new_name_who_dis_ t1_ixiofup wrote

Yea exactly. If you’re citing a paper you’re implicitly citing all of the papers that paper cited.

No one is citing the original perceptron paper even though pretty much every deep learning paper uses some form of a perceptron. The citation is implied: you go from the more complex architectures cited, to the simpler ones those cited, and so on until you get back to the perceptron.

6

alwayslttp t1_ixj9zpy wrote

All metrics are stacked massively in favour of first-level citations - many entirely ignore the second level and beyond. For example, a paper's "cited by" count is its most prominent metric of influence/importance, and it counts only the papers that directly cite it.
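As a quick illustration of that (hypothetical paper names again), a direct "cited by" count gives LSTM no credit for the papers that only reach it through GRU:

```python
from collections import Counter

# Hypothetical citation graph: paper -> papers it directly cites.
CITES = {
    "PaperA": ["GRU"],
    "PaperB": ["GRU"],
    "GRU": ["LSTM"],
    "LSTM": [],
}

# "cited by" count = number of papers citing a work DIRECTLY.
cited_by = Counter(c for refs in CITES.values() for c in refs)
print(cited_by["GRU"])   # 2 - PaperA and PaperB
print(cited_by["LSTM"])  # 1 - only GRU; the transitive credit is invisible
```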

I don't know this particular beef, but it sounds like citing GRU and not LSTM is a potential slight here. Exactly the kind of thing you see in petty academic rivalries. You're explicitly deciding who you're crediting with the key innovations you're building from, and you know that most people aren't chasing every sub-reference of every citation.

4

DigThatData t1_ixkghj5 wrote

sounds like the problem here is the metrics then. Which is also something I'm pretty sure only became a thing quite recently. For a long time, the only citation-based metric anyone talked about was their Erdős number, which was tongue-in-cheek anyway. Concern over metrics like this is more likely than not going to damage research progress by encouraging people to game them. The only "cited by" count I ever concern myself with is for sorting results on Google Scholar, and I never presume it's an exact count or that it maps directly to the ordering I actually need.

1