Submitted by billjames1685 t3_youplu in MachineLearning
billjames1685 OP t1_ivg5bij wrote
Reply to comment by IntelArtiGen in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
Yeah, I agree. Not sure if I'm misunderstanding you, but by "transfer learning" I basically mean that all of our pre-training (which occurred through a variety of methods, as you point out) has allowed us to richly understand images as a whole, so we can apply that understanding and generalize well to semi-new tasks/domains.
IntelArtiGen t1_ivg765c wrote
Ok, that's one way to say it; I also agree. I tend not to use the concept of "transfer learning" for how we learn, because I think it's more appropriate for well-defined tasks, and we are rarely confronted with tasks that are as well-defined as the ones we give our models.
Also, transfer learning implies that you have to re-train part of the model on a new task, and that's not exactly how I would define what we do. When I worked on reproducing how we learn words, I instead implemented the solution as a way to put a new label on a representation we were already able to produce from our unsupervised pretraining. I don't know which way is the correct one; I just know that this approach works, and that you can teach new words/labels to a model without retraining it.
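A minimal sketch of that idea in Python, assuming a frozen pretrained encoder; the `LabelAttacher` class, the stand-in `encoder`, and the nearest-centroid lookup are illustrative assumptions, not the commenter's actual implementation:

```python
import numpy as np

class LabelAttacher:
    """Attach new labels to a frozen encoder's representations without
    retraining: store one centroid embedding per label and classify new
    inputs by nearest centroid (cosine similarity)."""

    def __init__(self, encoder):
        self.encoder = encoder    # frozen, pretrained; never updated
        self.centroids = {}       # label -> mean embedding

    def teach(self, label, examples):
        # "Teaching a word" = averaging the embeddings of a few examples.
        embs = np.stack([self.encoder(x) for x in examples])
        self.centroids[label] = embs.mean(axis=0)

    def predict(self, x):
        emb = self.encoder(x)
        def sim(c):
            return emb @ c / (np.linalg.norm(emb) * np.linalg.norm(c) + 1e-9)
        return max(self.centroids, key=lambda lbl: sim(self.centroids[lbl]))

# Toy usage: a fixed random projection stands in for a pretrained network.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))
encoder = lambda x: W @ x

cat_examples = [rng.normal(loc=1.0, size=8) for _ in range(5)]
dog_examples = [rng.normal(loc=-1.0, size=8) for _ in range(5)]

clf = LabelAttacher(encoder)
clf.teach("cat", cat_examples)   # new label learned from a few examples
clf.teach("dog", dog_examples)   # no gradient updates anywhere
print(clf.predict(rng.normal(loc=1.0, size=8)))  # expected: "cat"
```

The point of the sketch is that the only "learning" is storing a centroid per label; the encoder's weights never change, which mirrors attaching a new word to a representation you could already produce.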
billjames1685 OP t1_ivh33jr wrote
That’s a fair point; I was kind of just using it as a general term.