Submitted by Tea_Pearce t3_10aq9id in MachineLearning
chimp73 t1_j45vsgb wrote
Bitter lesson 3.0: the entire idea of fine-tuning a large pre-trained model goes out the window once you consider that the creators of the foundation model can afford to fine-tune it even more than you can. Fine-tuning is extremely cheap for them, and they have far more compute. Instead of providing API access to intermediaries, they can simply sell services to customers directly.
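For concreteness, here's a minimal sketch of the kind of fine-tuning being discussed, using Hugging Face Transformers. The model name, data file, and hyperparameters are placeholders, not anyone's actual setup:

```python
# Minimal causal-LM fine-tuning sketch with Hugging Face Transformers.
# "gpt2" and chat_logs.txt stand in for a real foundation model and a
# proprietary corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "chat_logs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train = dataset["train"].map(tokenize, batched=True,
                             remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=train,
    # mlm=False -> next-token (causal) objective, not masked LM
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point being: the recipe is commodity code. Whoever owns the weights and the compute can run it at any scale they like.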
hazard02 t1_j46e13z wrote
I think one counter-argument is that Andrew Ng has said there are profitable opportunities that Google knows about but doesn't go after, simply because they're too small to matter to Google (or Microsoft, or any megacorp), even though those opportunities are large enough to support a "normal-size" business.
From this view, it makes sense to "outsource" the fine-tuning to the businesses buying the foundation models: why bother with a project that would "only" add a few million a year in revenue?
Additionally, if the fine-tuning data is very domain-specific or proprietary (e.g. your company's customer service chat logs), then the foundation model providers might literally not be able to do it.
Having said all this, I certainly expect a small industry of fine-tuning consultants/tooling/etc to grow over the coming years
Nowado t1_j46klvj wrote
From this perspective you could say there are products that wouldn't make sense for Amazon to bother with. How's that working out?
hazard02 t1_j46mbb6 wrote
Edit:
OK, I had a snarky comment here, but instead I'd like to suggest that the business models are fundamentally different: Amazon sells products that they (mostly) don't produce and offers a platform for third-party vendors. In contrast to something like OpenAI, they're an aggregator and an intermediary.
ThirdMover t1_j46t3fc wrote
I think the point of the metaphor was Amazon stealing product ideas from third party vendors on their site and undercutting them. They know what sells better than anyone and can then just produce it.
If Google or OpenAI offer people the opportunity to fine-tune their foundation models, they will know when something valuable comes out of it and can simply replicate it. There is close to zero institutional cost for them to do so.
That's why I think all these startups trying to build business models around ChatGPT are insane: if you do it and it actually works, OpenAI will just steal your lunch, and you have no way of stopping that.
Nowado t1_j4723n6 wrote
That was precisely the point.
Amazon started as a sales service and then moved to become a platform. Once it was a platform, everyone assumed the sales business was too small for them.
And then they started to cannibalize businesses using their platform.
GPT-5entient t1_j4s8q64 wrote
>I think the point of the metaphor was Amazon stealing product ideas from third party vendors on their site and undercutting them. They know what sells better than anyone and can then just produce it.
In many cases they are probably selling the same white-label item outright, just slapping "Amazon Basics" on it...
Phoneaccount25732 t1_j477kis wrote
The reason Google doesn't bother is that they are aggressive about acquisitions. They're outsourcing the difficult, risky work.
L43 t1_j45wbf1 wrote
Yeah I have a pretty dystopian outlook on the future because of this.
thedabking123 t1_j46pulo wrote
The one thing that could blow all this up is a requirement for explainability, which could push the industry toward low-cost (but maybe lower-performance) methods like neurosymbolic computing, whose predictions are much more understandable and explainable.
I can see something in self-driving cars (or LegalTech, or HealthTech) producing a terrible prediction with real consequences. That would drive a public backlash against unexplainable models, and maybe laws against them too.
Lastly, this would make deep learning models and LLMs less attractive if they fell under new regulatory regimes.
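As a toy illustration of that trade-off (a shallow scikit-learn decision tree standing in for interpretable methods generally; it's not neurosymbolic computing, just the simplest model whose reasoning can be printed verbatim):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Public tabular dataset as a stand-in for, say, a medical triage task.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A depth-3 tree: weaker than a deep net, but fully auditable.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction reduces to a chain of human-readable threshold rules,
# which is the property a regulator could actually demand.
print(export_text(clf, feature_names=list(X.columns)))
```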
DisWastingMyTime t1_j47ans8 wrote
In vision/robotics this is already the case: low hardware/low cost requirements are an incredible selling point for the automotive industry, so large, disgusting models are out.
But we still use deep learning. If anything, it's pretty surprising how much is possible with "shallow" models for specialized domains, but that's still very far from explainable models.
fullouterjoin t1_j4vbawe wrote
> requirements for explainability
We have to start pushing for this legislation now. If you leave it up to the market, Equifax will just make a magic Credit Score model that will be like huffing tea leaves.
RomanRiesen t1_j46ixvh wrote
Counterpoint: markets that are small and specialised and require tons of domain knowledge, e.g. training the model on Israeli law in Hebrew.
Smallpaul t1_j4a0daf wrote
How many team members would it take to build ChatLawGPT and feed it tons of Hebrew content? Isn't the whole point that it can learn domain knowledge?
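For a sense of how little code that takes, here's a hypothetical sketch using OpenAI's legacy fine-tuning endpoint (the openai-python 0.x interface; file name, data, and key are placeholders):

```python
# Hypothetical: fine-tune a base model on Hebrew legal Q&A pairs.
import openai

openai.api_key = "sk-..."  # placeholder

# Each JSONL line: {"prompt": "<question about Israeli law>",
#                   "completion": "<answer>"} -- in Hebrew.
upload = openai.File.create(
    file=open("israeli_law_hebrew.jsonl", "rb"),
    purpose="fine-tune",
)

job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",  # base models were the fine-tunable ones at the time
)
print(job.id)
```

The hard part isn't the code, it's assembling and cleaning the Hebrew legal corpus in the first place.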
ghostfuckbuddy t1_j46eikm wrote
The compute is cheap but the data may not be easily accessible.
granddaddy t1_j47hbby wrote
This guy makes a similar comparison in his blog but goes into a bit more detail than the tweet.
https://trees.substack.com/p/false-dichotomy-and-disillusion-in
Is it worth creating your own models or extensively fine-tuning foundational models? Probably not.
weightloss_coach t1_j4a2sx8 wrote
It's like saying that the creators of databases will create all SaaS products.
For the end user, many more things matter.
make3333 t1_j47zeza wrote
& often you don't even need to fine-tune, because of instruction pre-training and few-shot prompting
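E.g., a few-shot prompting sketch (legacy openai-python 0.x Completion API; the model name and task are illustrative): no fine-tuning at all, just worked examples prepended to the prompt.

```python
import openai

# A handful of in-context examples replaces a fine-tuning run.
prompt = """Classify the sentiment of each review.

Review: "The product broke after two days."
Sentiment: negative

Review: "Exactly what I needed, great value."
Sentiment: positive

Review: "Shipping took forever but the item is fine."
Sentiment:"""

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=3,
    temperature=0,  # deterministic-ish output for classification
)
print(resp.choices[0].text.strip())
```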
pm_me_your_pay_slips t1_j48741u wrote
The bitter lesson will be when fine-tuning and training from scratch become the same thing.
Arktur t1_j48rwm7 wrote
That's not the bitter lesson, that's just capitalism.
sabetai t1_j49eq10 wrote
API devs haven't been able to use GPT-3 effectively and will likely be competed away by more product-like releases such as ChatGPT.