RSchaeffer t1_izgxqod wrote
Reply to comment by VirtualHat in [D] Workflows for quickly iterating over ideas without free access to super computers by [deleted]
These links don't work for me. Can you double-check them?
RSchaeffer t1_iwiq0gq wrote
Reply to [P] 🔥 CleanRL has reached v1.0.0; Reworked documentation, JAX support, and more! by vwxyzjn
Where's a good place to learn about the landscape of different RL libraries and understand how CleanRL compares to them?
RSchaeffer t1_ivuizy4 wrote
Reply to [Discussion] Can we train with multiple sources of data, some very reliable, others less so? by DreamyPen
The topic you're looking for is "weak supervision."
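To make that concrete, the simplest version of the idea is to down-weight the loss on examples from less reliable sources. Here's a minimal PyTorch sketch; the source IDs and reliability weights are made up for illustration, and real weak-supervision frameworks (e.g., Snorkel) estimate source reliabilities rather than hard-coding them:

```python
import torch
import torch.nn.functional as F

# Hypothetical per-source reliability weights (illustrative only):
# source 0 = carefully curated data, source 1 = noisy scraped data.
SOURCE_WEIGHTS = torch.tensor([1.0, 0.3])

def reliability_weighted_loss(logits, labels, source_ids):
    """Cross-entropy where each example's loss is scaled by the
    (assumed) reliability of the data source it came from."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    weights = SOURCE_WEIGHTS[source_ids]       # look up weight per example
    return (per_example * weights).sum() / weights.sum()
```

You'd pass a `source_ids` tensor alongside each batch so every example carries the reliability of whichever source produced it.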
RSchaeffer t1_it4gb7v wrote
Reply to comment by ml_magic_ in [Research] Scholars Program by ml_magic_
Can people apply if they have ML publications but no NLP publications, and want to transition?
RSchaeffer t1_jb26p98 wrote
Reply to [R] [N] Dropout Reduces Underfitting - Liu et al. by radi-cho
Lucas Beyer made a relevant comment: https://twitter.com/giffmana/status/1631601390962262017
"""
The main reason highlighted is minibatch gradient variance (see screenshot).
This immediately asks for experiments that can validate or nullify the hypothesis, none of which I found in the paper
"""