PeedLearning t1_ivbajse wrote
Reply to comment by [deleted] in [D] Simple Questions Thread by AutoModerator
Matrix completion is part of it, also determinantal point processes, reinforcement learning, clustering, ...
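(As a toy illustration of the matrix completion item above: the standard trick is to fit a low-rank factorization to the observed entries only, then read predictions for the missing entries off the product. Everything here, the shapes, rank, learning rate, and step count, is illustrative, not from the original thread.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground truth: a rank-2 matrix with roughly half its entries observed.
U_true = rng.normal(size=(30, 2))
V_true = rng.normal(size=(20, 2))
M = U_true @ V_true.T
mask = rng.random(M.shape) < 0.5  # True where the entry is observed

# Fit a low-rank factorization U @ V.T to the observed entries only,
# by gradient descent on the squared error restricted to the mask.
U = 0.1 * rng.normal(size=(30, 2))
V = 0.1 * rng.normal(size=(20, 2))
lr = 0.02
for _ in range(5000):
    R = mask * (U @ V.T - M)                  # residual on observed entries only
    U, V = U - lr * R @ V, V - lr * R.T @ U   # simultaneous gradient step on both factors

# U @ V.T now also gives predictions for the entries that were never observed.
```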
PeedLearning t1_iuwdj73 wrote
Reply to comment by Mefaso in [D] What are the benefits of being a reviewer? by Signal-Mixture-4046
[citation needed]
Some are non-profit, but most of the IEEE conferences, for example, are for-profit... Another common pattern is conference non-profits subcontracting much of the organizational work to for-profit companies.
PeedLearning t1_iu4r6gf wrote
Reply to [D] DL Practitioners, Do You Use Layer Visualization Tools s.a GradCam in Your Process? by DisWastingMyTime
No, I've never used them.
PeedLearning t1_irdfrn4 wrote
Reply to comment by carlml in [Discussion] Best performing PhD students you know by Light991
I am not sure what you would consider SOTA in few-shot RL. The benchmarks I know of are quite ad hoc and don't have much impact outside of computer science research papers.
The people that work on applying RL for actual applications don't seem to use meta-RL.
PeedLearning t1_irbosst wrote
Reply to comment by carlml in [Discussion] Best performing PhD students you know by Light991
(I have published myself in the meta-learning field, and worked a lot on robotics)
I see no applications of meta-learning appearing, outside of self-citations within the field. The SOTA in supervised learning doesn't use any meta-learning; neither does the SOTA in RL. The promise of learning to learn never really came true...
... until large supervised language models seemed to suddenly meta-learn as an emergent property.
So not only did nothing in the meta-learning field really take off or have impact outside of computer science research papers, its original reason for being has been subsumed by a completely different line of research.
Meta-learning is no longer a goal; it's understood to be a side effect of sufficiently large models.
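(For readers unfamiliar with what's being dismissed here: MAML-style meta-learning trains initial parameters so that one gradient step on a new task's support set already does well on that task's query set. Below is a minimal first-order sketch, closer to FOMAML than to full second-order MAML, on a toy linear-regression task family. All function names and hyperparameters are my own illustration, not from the thread or the MAML paper.)

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # mean squared error of the linear model y ≈ X @ w
    return np.mean((X @ w - y) ** 2)

def loss_grad(w, X, y):
    # gradient of the mean squared error w.r.t. w
    return 2 * X.T @ (X @ w - y) / len(y)

def sample_task(rng, n=20, d=3):
    # each task is a regression problem with its own random true weights;
    # returns a support set (for adaptation) and a query set (for evaluation)
    w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X[:10], y[:10], X[10:], y[10:]

w_meta = np.zeros(3)
inner_lr, outer_lr = 0.1, 0.05
for _ in range(500):
    Xs, ys, Xq, yq = sample_task(rng)
    # inner loop: one gradient step of task-specific adaptation on the support set
    w_adapted = w_meta - inner_lr * loss_grad(w_meta, Xs, ys)
    # outer loop (first-order): update meta-parameters using the query-set
    # gradient evaluated at the adapted parameters
    w_meta = w_meta - outer_lr * loss_grad(w_adapted, Xq, yq)
```

After meta-training, one inner step on a fresh task's support set should lower that task's query loss relative to the unadapted meta-parameters; that post-adaptation improvement is the quantity MAML optimizes for.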
PeedLearning t1_ir70vku wrote
Reply to comment by fromnighttilldawn in [Discussion] Best performing PhD students you know by Light991
Hundreds of PhD students were in their position. Few made such an impact.
It's not easy to be impactful, even with a good supervisor. In Krizhevsky's case, one could even argue he had a big impact despite having Hinton as a supervisor: AlexNet was built somewhat behind Hinton's back, as he didn't approve of the research direction. Hinton did turn around later and recognize its importance, though.
PeedLearning t1_ir48kps wrote
Reply to comment by Light991 in [Discussion] Best performing PhD students you know by Light991
Yes, MAML is on top. But I don't think it has been very impactful, and neither has the whole field of meta-learning, really.
PeedLearning t1_ir2ar52 wrote
Reply to comment by Light991 in [Discussion] Best performing PhD students you know by Light991
Any concrete papers you have in mind?
PeedLearning t1_ir29bk8 wrote
Chelsea Finn? I knew very few people who were using MAML during her PhD, and even fewer after.
I reckon e.g. Ian Goodfellow had a lot of impact during his PhD. Alex Krizhevsky is another name with big impact.
PeedLearning t1_j97x4yf wrote
Reply to comment by Villad_rock in What’s up with DeepMind? by BobbyWOWO
No, basically nothing in terms of fundamental research.