Submitted by Tea_Pearce t3_10aq9id in MachineLearning
This Twitter thread from Karol Hausman discusses the original bitter lesson and proposes a bitter lesson 2.0. https://twitter.com/hausman_k/status/1612509549889744899
"The biggest lesson that [will] be read from [the next] 70 years of AI research is that general methods that leverage foundation models are ultimately the most effective"
It seems to be derived from the observation that the most promising work in robotics today (where generating data is challenging) piggy-backs on the success of large language models (think SayCan, etc.).
Any hot takes?
chimp73 t1_j45vsgb wrote
Bitter lesson 3.0: The entire idea of fine-tuning a large pre-trained model goes out the window when you consider that the creators of the foundation model can afford to fine-tune it even more than you can, since fine-tuning is extremely cheap for them and they have far more compute. Instead of providing API access to intermediaries, they can simply sell services to customers directly.
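For anyone unfamiliar with the setup being argued about: below is a minimal sketch of what "fine-tuning a large pre-trained model" looks like in practice, assuming PyTorch and the Hugging Face transformers library. The model name and the toy dataset are illustrative placeholders, not anything from the thread; the point is just that the downstream party only runs a few cheap gradient steps on top of weights someone else paid to pre-train.

```python
# Minimal fine-tuning sketch (assumes torch + transformers are installed;
# model name and toy data are placeholders for a real downstream task).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # stand-in for a foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2  # new task head; body weights are pre-trained
)

# Tiny labeled batch standing in for the downstream dataset.
texts = ["great product", "terrible service"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):  # a few passes is often enough for fine-tuning
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()  # gradients update the whole pre-trained network
    optimizer.step()
    print(f"epoch {epoch}: loss {outputs.loss.item():.4f}")
```

The asymmetry in the comment falls straight out of this: the loop above is a trivial fraction of pre-training cost, so whoever already holds the weights and the compute can run it at scale for every customer.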