starfries

starfries t1_jdyx0xh wrote

I feel like Eliezer Yudkowsky proves that anyone can be Eliezer Yudkowsky, going from a crazy guy with a Harry Potter fanfic and a blog to being mentioned in your post alongside those other two names.

5

starfries t1_j87ypnt wrote

Maybe it's a difference in fields. I rarely see people do meta-analyses in ML, so it didn't strike me as odd. Most of the reviews are just "here's what people are trying" with some attempt at categorization. But I see what you mean now; it makes sense that meta-analysis is important in medical fields, where you want to aggregate studies.

2

starfries t1_j87r1js wrote

I have definitely seen the kind of papers you're talking about, but this one seems fine to me? Granted, I only skimmed it quickly, but the title says it's a review article and the abstract reflects that.

As an aside: I really like the format I see in bio fields (and maybe others, but this is where I've encountered it) of putting the results before the detailed methodology. It doesn't make sense for a lot of CS papers, where the results are the most boring part (essentially "it works better"), but where it does, it leads to a much better paper in my opinion.

3

starfries t1_j6l0aeq wrote

Thanks for that resource! I've been experimenting with the lottery ticket method, but that's a lot of papers I haven't seen. Did you initialize the weights as if training from scratch, or did you do something like matching the variance of the old and new weights? I'm intrigued that your method didn't hurt performance - most of the things I've tested were detrimental to the network. I have seen some performance improvements under different conditions, but I'm still trying to rule out confounding factors.
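For context, here's a minimal numpy sketch of one round of the lottery ticket procedure I've been experimenting with (iterative magnitude pruning with rewinding to the original initialization). The function name and `prune_frac` parameter are my own, not from any of those papers:

```python
import numpy as np

def lottery_ticket_round(w_init, w_trained, mask, prune_frac=0.2):
    """One round of iterative magnitude pruning with rewinding.

    Prunes the smallest-magnitude surviving trained weights, then resets
    the survivors to their original initialization values (w_init).
    An alternative (the variance-matching idea mentioned above) would
    rescale fresh random weights instead of rewinding to w_init.
    """
    surviving = np.abs(w_trained[mask])
    k = int(prune_frac * surviving.size)  # how many weights to prune this round
    if k > 0:
        # threshold at the k-th smallest surviving magnitude
        thresh = np.sort(surviving)[k - 1]
        new_mask = mask & (np.abs(w_trained) > thresh)
    else:
        new_mask = mask.copy()
    # rewind: surviving weights restart from their initial values, rest are zeroed
    return w_init * new_mask, new_mask
```

Each call shrinks the surviving set by roughly `prune_frac`, so repeating it (with retraining in between) gives the usual exponential sparsity schedule.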

1