jimmymvp t1_j7yubak wrote

A pretty famous stats professor once told me that he should've switched to ML a long time ago. Now he does ML research, obviously very rigorous. He said that stats is making up questions that are to a large extent not practically useful.

8

AdFew4357 t1_j7ztran wrote

Stats is finding interpretable ways to look at and model data that ML plug-and-chug CS people don't do

−2

jimmymvp t1_j806dx2 wrote

Just communicating what I've heard. Nevertheless, I think the whole interpretable ML community (at the very least) would disagree with you on this one :). Reducing ML to "plug and chug" is, well... speaks for itself :D

3

AdFew4357 t1_j806plm wrote

The whole landscape of ML research is a hunt to chase SOTA by tweaking an architecture here or using a different optimizer there and then squeezing out 0.2% accuracy on some well known imaging dataset in an attempt to churn out papers. That’s not science if you ask me.

−1

jimmymvp t1_j83v503 wrote

I'm not sure you have a good overview of ML research if this is your claim. Sounds like you've read too many blog posts on transformers. I'd suggest going through some conference proceedings to get a good overview; there's some pretty rigorous (not just stats) work out there. I agree, though, that there is a substantial subset of ML research that works on tweaking and pushing the boundaries of what existing methods can achieve, which for me personally is exciting to see! A lot of cool stuff came out of scaling up and tweaking architectures.

2