tripple13 t1_j0tiuy3 wrote
Reply to comment by suflaj in [D] Will there be a replacement for Machine Learning Twitter? by MrAcurite
Wow, why do people downvote this?
Is it a must that every researcher should be an activist?
What if you just want to be left in peace, doing the research you enjoy?
tripple13 t1_j0tim25 wrote
Honestly, this might be a completely wrong bet, but:
Nothing will change. ML Twitter will stay ML Twitter.
tripple13 t1_iy7s3vx wrote
Not at all.
There are, in fact, plenty of people who hack something together, a Frankenstein of open-source code combined with other work found on GitHub.
It's just hard to innovate if you're bound by the limits of your own understanding of what's going on. Not impossible, just immensely harder.
tripple13 t1_ixgqyyi wrote
To be fair, I understand your motivation, I've had similar reservations.
However, the amount of boilerplate code I've been writing (DDP, train/eval loops, metric tracking, etc.) has shrunk dramatically since switching to PyTorch Lightning.
When you are measured by your efficiency in terms of hours spent, I'd definitely argue for simplifying things rather than not.
tripple13 t1_iw7kfzy wrote
Well, daily.
But I do research.
While you can solve a lot of problems with out-of-the-box models and a bit of fine-tuning, solving problems in a new way often requires custom/new models.
tripple13 t1_ivqpwft wrote
Reply to comment by dasayan05 in [Discussion] Could someone explain the math behind the number of distinct images that can be generated with a latent diffusion model? by [deleted]
This.
I guess that's what's remarkably fascinating about these models.
Although, in essence, you are putting a prior on the training set, so there should be some limit to the manifold from which samples are generated.
tripple13 t1_it6opvv wrote
Reply to comment by acdjent in [D] Accurate blogs on machine learning? by likeamanyfacedgod
+1 - Lilian Weng is straight 🔥
tripple13 t1_it1vs7l wrote
Reply to [D] is a strong background in math/stats/cs in a necessary condition for becoming a renowned researcher in the ML community? *A passive rant* by [deleted]
Being pedantic is at least not a prerequisite.
Normalization is just centering and standardizing the data, which these researchers are fully aware of.
Does that mean you suddenly transform Poisson distributed data into Gaussian? No.
Is it a big mistake to name it as such? Ahh, I don't know. Is it a measure of their mathematical ability? No, definitely not.
Does it tell you something about the person's level of pedantry? Maybe.
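To make the point above concrete, here's a minimal sketch in pure Python (with made-up, right-skewed count data standing in for Poisson samples): standardizing pins the mean to 0 and the standard deviation to 1, but it leaves the shape of the distribution, measured here by skewness, completely untouched, so the data does not become Gaussian.

```python
import statistics

def standardize(xs):
    # Center and scale: subtract the mean, divide by the standard deviation.
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    return [(x - mu) / sigma for x in xs]

def skewness(xs):
    # Third standardized moment; roughly zero for symmetric (e.g. Gaussian) data.
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    return sum(((x - mu) / sigma) ** 3 for x in xs) / len(xs)

data = [0, 0, 1, 1, 1, 2, 2, 3, 5, 8]  # made-up right-skewed counts
z = standardize(data)

# z now has mean 0 and standard deviation 1, but its skewness is
# identical to that of `data`: still visibly non-Gaussian.
```

Skewness is invariant under any affine transform, which is exactly why "normalizing" skewed data in this sense doesn't make it normally distributed.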
I'd argue you can become successful in this field in many ways: one of them may be very specific and T-shaped (deep in measure theory, for instance), others may be more rounded and broad-based. Whatever works for you.
tripple13 t1_j0ys5fk wrote
Reply to comment by Hyper1on in [D] Will there be a replacement for Machine Learning Twitter? by MrAcurite
Yup, agree completely with your second point. The user experience, state, and design of Mastodon are substantially less appealing.
As for the drama, I personally don't care much for it and try to avoid it like the plague.