Submitted by mjrossman t3_11ws42u in Futurology

TL;DR - LLM finetuning costs just dropped precipitously with Alpaca. The same has yet to manifest for other model frameworks (linked below), but presumably it will. the recent job impact paper (linked below) shows which jobs are most exposed, but what it doesn't reveal is the ease with which firms and broader, multiply-staged enterprises can self-organize and compete on this basis.

there's been a lot of recent developments, obviously the most pressing trend has been AI's societal impact. if anyone's read the recent job impact paper, one of the factors that jumped out was the exposure of blockchain engineering to AI-based automation. whether this paper is logically sound is worth debating on its own, but we should also explore the significance of an effectively automated, self-regulating computing superstructure in simpler terms, and compare it to other software impacts exposed in the same scope.

most web-based software has fallen on a spectrum: fully obscure cloud applications with some unknown scope of human supervision, all the way to fully self-forking codebases with open requests for comment. if you take something like wikipedia or reddit and place it on this spectrum, you might guess that such platforms fall closer to the open end than the obscure end. now consider StableDiffusion/LLaMa/Alpaca vs Midjourney/ChatGPT. I can't overstate this enough: if we demonstrate that the economics favor the open end (as can be observed with recent LLM developments), then we have to extend this line of reasoning to the downstream markets.

all human jobs are either self-supervised or fall in a hierarchy of supervisors, and this determines the labor market. in fact, the biggest argument against "robots taking our jobs" has been the macroeconomics of how cheap human labor can be. and entire governments make bets on this idea. in case anyone is interested, this is a recent high-level perception of "employer" vs "non-employer" firms: https://cdn.advocacy.sba.gov/wp-content/uploads/2020/11/05122043/Small-Business-FAQ-2020.pdf

this raises the question: what is the free market value of executive function? I mean, really, in stark terms, how expensive is it to form a de facto union within an employer firm, or even incorporate multiple partners into a limited liability small business model? if someone makes the claim: "we, a team of 5, have the means to deploy the corporate equivalent of a unicorn with 1/1000th the capital cost", the corollary question is: how much is the public market willing to spend on an incumbent firm that spends millions to compensate a CEO and employs thousands at "competitive" rates? what minimum of private equity is needed to guarantee market capture for a startup? I guarantee you those margins trend to zero over years, if not months. the commercial/shareholder landscape is about to become extremely interesting.

but back to the point: what does a highly exposed, self-regulating software ecosystem mean on its own terms? why are Alpaca & Langchain so significant in this context? the answer is how catalyzed dogfooding becomes over a matter of months. the cost of finetuning a small, "run on your Raspberry Pi" LLM on any subdivision of knowledge (especially codebases) just dropped to retail levels. the next cost to drop is the discovery of high-level SOPs with low-level daisy-chaining of these diverse models. and given the preexisting, battle-tested examples of n-tiered application architecture on blockchains, the marginal cost of smart policy development, testing, and auditing also drops over the next few years. that's the market for arbitrary executive function of any group of market participants. with respect to ML frameworks like sparsely-gated MoE, world models, multimodality, and adaptive agents: we won't see how the shoe drops until the costs cross the critical threshold, but it should be clear that we can assume they will, and guess as to when. and I haven't even described the potential impact of Learning@Home in that respect.
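to make the daisy-chaining idea concrete, here's a minimal sketch (all names are hypothetical; `run_model` is a stand-in for local inference against a finetuned checkpoint, stubbed here so the control flow runs on its own) of routing one request through a chain of small, specialized models following a high-level SOP:

```python
# Sketch: daisy-chaining small, specialized finetuned models behind one SOP.
# `run_model` stands in for local inference (e.g. a llama.cpp invocation);
# it is stubbed so the orchestration logic is runnable by itself.

def run_model(name: str, prompt: str) -> str:
    # Placeholder for a call to a finetuned local checkpoint.
    return f"[{name}] {prompt}"

# The SOP: an ordered list of (model, prompt template) stages.
SOP = [
    ("triage", "Classify this request: {x}"),
    ("draft",  "Draft a response to: {x}"),
    ("review", "Audit this draft for errors: {x}"),
]

def run_sop(request: str) -> str:
    # Feed each stage's output into the next stage's template.
    out = request
    for model_name, template in SOP:
        out = run_model(model_name, template.format(x=out))
    return out

print(run_sop("refund order #123"))
```

the orchestration layer itself is trivially cheap; the expensive part, discovering which SOP decomposition actually works, is exactly the cost being argued to drop.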

if anyone has heard of the "Mechanical Turk", it's a way in which humans can cooperate to falsely appear as a complex mechanism. as Charlie Munger says, "show me the incentives [or costs], and I'll show you the outcome". It's not about how AI displaces us, it's about what AI compels us to freely displace. at the end of the day, the only vibe that matters is the potential impact of any given tinkering in any given garage, literally or metaphorically. and just like natural ecology, we can and should perceive that the marginal gains of obscure AI or a deliberately inefficient labor economy is going to be dwarfed by something open-source. just food for thought.

17

Comments


mjrossman OP t1_jcziz7d wrote

the sober outlook is that whatever commodity the company offers is going to produce a slimmer margin over time, especially if it's digital. on the other hand, I'm pretty confident that most firms (especially sole proprietorships) are more capable of affording their own optimization process. for any given employee, the important question is whether they understand the practice of what they're paid for, relative to the labor market. the followup is whether they are industrious enough to form their own firm and compete.

0

Gameplan492 t1_jczru9p wrote

I've often felt that AI is a bit like virtual reality - it's promised a lot over the decades and is undoubtedly better than previous iterations, but it's still not a substitute for the real thing.

Take the example of code writing. It will help make engineering faster, but you still need to know what to ask for and then what to do with it. Until AI can guess what we need and how and where we want it implemented, how can it really replace a human?

5

mjrossman OP t1_jczv8o2 wrote

100%, it makes engineering faster, not more real. the critical step beyond that is that AI makes the flow from ideation to execution feasible where the cost of engineering used to make that flow prohibitive. this applies to RFCs as well. it's like suddenly having connected rooms in virtual reality because the environment upgraded to doors and opposable thumbs.

it doesn't take much for a layperson to hallucinate bad code via prompt right now, whereas the barrier for a layperson to produce any code at all used to be binary. it's going to be even easier to subdivide an LLM prompt into chains of prompts. if one can load the respective codebase/docs as context (GPT-4 goes up to 32k tokens), the cost of hallucinating bad, but very relevant, code gets progressively cheaper.
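a rough sketch of that subdivision (hypothetical helper names; `ask` stubs out a real completion call, and token counts are crudely approximated by word counts): chunk the codebase/docs to fit a context window, ask a sub-question per chunk, then merge the partial answers in one final prompt:

```python
# Sketch: fit a doc/codebase dump into a model's context window by chunking,
# asking a sub-question per chunk, then merging partial answers in a final
# prompt. `ask` is a stub for a real completion call (e.g. against a model
# with a 32k-token window); tokens are approximated by whitespace words.

CONTEXT_LIMIT = 200  # "tokens" per call, deliberately tiny for the sketch

def ask(prompt: str) -> str:
    # Placeholder for an LLM completion call.
    return f"answer({len(prompt.split())} tokens in)"

def chunk(words, limit):
    # Yield word-windows that fit under the context limit.
    for i in range(0, len(words), limit):
        yield " ".join(words[i:i + limit])

def answer_over_docs(question: str, docs: str) -> str:
    # One sub-prompt per chunk of the docs...
    partials = [ask(f"{question}\n---\n{part}")
                for part in chunk(docs.split(), CONTEXT_LIMIT)]
    # ...then a final merge prompt over the partial answers.
    return ask(question + "\n" + "\n".join(partials))
```
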

right now, I expect any OSS community to progressively gain the ability to dogfood on whatever natural language the testers and powerusers are outputting. I think that major platforms, like social media, are quickly going to figure out that they can offer an experimental branch and not twiddle their thumbs around an unanswered user survey because of how easy it will be to transcribe sentiment & nuanced feedback from the comments.
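as a toy illustration of transcribing sentiment from a comment stream (a real version would use an LLM classifier; the keyword stub here is an assumption that just keeps the sketch runnable):

```python
# Toy sketch: tally sentiment over a stream of user comments.
# `classify` is a keyword stub standing in for an LLM-based classifier.

from collections import Counter

POSITIVE = {"love", "great", "works"}
NEGATIVE = {"broken", "hate", "slow"}

def classify(comment: str) -> str:
    # Negative signals win ties; everything unmatched is neutral.
    words = set(comment.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def feedback_report(comments):
    # Aggregate per-comment labels into a sentiment tally.
    return Counter(classify(c) for c in comments)

print(feedback_report(["love the new branch", "search is broken", "ok"]))
```

swap the stub for a per-comment LLM call and the same tally becomes the "transcribed survey" an experimental branch could read instead of waiting on explicit responses.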

point being, software doesn't impact the world because of how self-involved the team of a monolith is. software impacts the world when the modularity spikes (between many teams/firms and the larger market).

3

Shodidoren t1_jd00fn2 wrote

Good post


> at the end of the day, the only vibe that matters is the potential impact of any given tinkering in any given garage, literally or metaphorically

This is the most important part, methinks. Given how fast the tech is exploding, it's vital that we bring down the costs of essentials first. Food, drink & basic consumables. This will soften the blow of any and every commotion that happens as a result of high unemployment. We need the robots asap. The displacement of white collar workers will leak into blue collar jobs. People will chase manual jobs if their survival is on the line. The robots may be too primitive to steal your trade job, but the junior dev who just lost his is not.

1

mjrossman OP t1_jd09x5h wrote

so long as the product is not a commodity. if the market errs towards an oligopoly, of course those firms have pricing power. that is definitely the present circumstance; however, things like LLaMa, StableDiffusion, and Alpaca are demonstrating that AI (the orchestrating element) can be a commodity. in other words, if your labor is so specialized that you control the pricing power, then it's further in your self-interest to be self-employed over time. if the firm employing you provides something that compels you to surrender your pricing power, then that is a bargaining cost that will likely shrink over time as it gets commodified.

1

mjrossman OP t1_jd167w5 wrote

rational market actors recognize their own worth. either they're underpaid by the company and should become independent, or they're overpaid and are insulated from competing in the open market. in either case, the tragedy of the commons is that all firms compete to the extent that they can dispose of their profit margin, and ultimately the end consumer benefits from commodification.

OTOH, with respect to artisanal goods & services, it makes further sense for employees to not be commodified as labor by a larger firm if they're artisans. they should compete in their niche market. but that is not an acceptance that the larger public market should be captive to a firm, even if that firm sets a less efficient, higher price to offset the employment of commodified labor.

1

nova_demosthenes t1_jd1t4pv wrote

It doesn't "replace a human." Just as a few people and a couple pieces of farm equipment replaced dozens or hundreds of workers on a farm, so too will AI coupled with a software architect and a couple seasoned programmers replace entire teams.

I know this because I'm already doing it.

3

Disastrous_Ball2542 t1_jd29w2g wrote

Mainstream media's current push of the AI labour narrative is to decrease pay for current workers in an attempt to lower wage inflation--this is coordinated with mass layoffs in tech. Truth is AI is nowhere near close to replacing human workers economically and at scale, but MSM wants to push the narrative for lower worker pay

Maybe AI will replace jobs later on at scale, but currently it's just a fictional part of a narrative--just like the metaverse narrative was pushed during last crypto bull bubble

0

mjrossman OP t1_jd2agwo wrote

I'm going to push back on this. from what we know, certain jobs are being laid off more, and in tech of all sectors. this is not a story about how blue collar or service sector wages have risen to meet the cost of living (they likely never will). in those cases robotics can conceptually replace the labor in a bespoke fashion, but the economies of scale are the limiting factor. what's being described in the jobs exposure paper are heavily routine, white-collar tasks that are being automated & scaled. it might not hit us today or tomorrow, but at some point this particular economy of scale is going to dwarf the impact of outsourcing.

0

Disastrous_Ball2542 t1_jd2bh5g wrote

Current round of tech layoffs are not because of AI but because cheap capital has dried up and the money supply has tightened

It may eventually happen, but not during this economic cycle

It's like how internet adoption didn't peak until 20 to 30 years after the first dot-com bubble--we are in the first inning of the AI bubble; large scale adoption and economies of scale are probably at least 10 years away

Current hype cycle is to raise $$$ for the AI companies current R&D, real use cases and adoption will come much much later and likely after a boom and bust cycle

Stupid AI companies like that AI-lawyer glorified chatbot will be the pets.com of this AI bubble, and there will be many others... a few may become Amazon, but that's gonna be 10 or 20 years away, and that company may be OpenAI or it may not even have been started yet

Also human labour is cheap cheap cheap... it will be a long time before a bespoke robot is cheaper than undocumented migrant labor, if ever

0

mjrossman OP t1_jd2c8ty wrote

my point was that some jobs are more exposed to layoffs because of their nature. likewise, some aspects of labor economy are exposed in some way to future technological discoveries no matter what they are. I would say we are more than one inning in; there's OSS text-to-image and NLP that's trained at scale and inferred on retail hardware, with the corresponding backlash. the inning we're in is the battle over executive function, or the scale at which human workers opensource the standard operating procedures they're most familiar with. the takeaway should be that monolithic firms with tens of thousands of employees are volatile enough as it is, but the least exposed jobs within, with respect to AI, are going to want to compete as smaller, more insulated firms.

0

Disastrous_Ball2542 t1_jd2co7w wrote

Not to be rude but I'll be blunt. It's a lot of words you've typed, but you aren't saying much other than: hey, it's not happening now, but some time in the future AI will replace a lot of jobs, some more likely than others... not exactly an original or controversial take on things

0

mjrossman OP t1_jd2dube wrote

with all due respect, isn't it kind of a cliche to preface something with "not to be rude" with the self-awareness that you're about to state something condescending? and I'd argue the same point of originality or productivity when it comes to estimates of scale for displacement or acceleration. not going to belabor the point, but I'm just emphasizing that "AI replace jobs" is a red herring debate and a waste of time, the actual debate is whether the nature of the firm can be challenged, given the observed change to the market. "If you don't believe me or don't get it, I don't have time to try to convince you, sorry."

0

Disastrous_Ball2542 t1_jd2gd6m wrote

Ok I'll just be blunt this time then. Your question is basically asking the same thing: evaluating the cost-effectiveness of AI replacing jobs is basically the same as evaluating the market value of a change in the nature of the firm

"AI has reduced payroll by $100mil" vs "the perceived market value of XYZ change in the nature of the firm has generated a $100mil reduction in expenses or a $100mil increase in value" = basically the same thing

2

mjrossman OP t1_jd2gyxk wrote

precisely the point. but it's not a theoretical conjecture, there is a very practical connection between the job exposure paper, several sources of labor market truth, and the current capabilities of Langchain + Alpaca. everyone should be asking why the public should bear anywhere near the same cost to justify the multiples that VCs/CEOs/EAs/consultants are compensated at, given how exposed these sectors are at present.

1

Disastrous_Ball2542 t1_jd2h47l wrote

It is theoretical because currently (and for the foreseeable future) value multiples are more affected by interest rates than by AI or any other technology

For purely speculative tech like AI, valuations don't matter, momentum and money flow does ie. Pets.com during dot com bubble

1

mjrossman OP t1_jd2hn4a wrote

are we absolutely sure that the average multiple during high-rate environments is going to stabilize at a minimum, and that during a zero-rate moment in the not-too-distant future, average multiples are going to reach a maximum because they must be dictated by retail speculation as the buyer of last resort? I'm willing to argue no to both. we're going to observe disruption that supersedes the macro sentiment.

1

mjrossman OP t1_jd2i3m5 wrote

no, I'm the guy saying that books can easily sell online because they're nonperishable, dense, and can be packaged in a garage. and regardless of the chatgpt hype, we're literally days after the discovery that someone can package LLMs that hit the same benchmarks from their garage.

1

mjrossman OP t1_jd2lq7e wrote

  1. the training & inference costs have dropped to triple digits and a phone app, respectively.
  2. given the preexisting codebase for distributed training, some non-negligible fraction of the billions of GPUs are going to be volunteered in an exascale fashion not unlike Folding@Home.
  3. given that many business processes have already been articulated & opensourced in natural language, effectively any SME has the means to finetune their own nuances & SOPs to drastically lower training costs and turnover for new employees. this is a multimodal trend, any apprentice in the world can snap a photo of what they're doing and ask an LLM what to do next. eventually, it will be video if that modality can be inferred on mobile hardware.
  4. admission to the bar might still be the bottleneck for lawyers, but licensing is no longer the same bottleneck for incorporation and other legal services.
  5. given how much operational budget in hospitals goes to administrative work, I'm curious to see how the people deal with their medical bills in the next couple of years.
  6. we haven't even confronted garage-tier sentiment analysis. I genuinely wonder how many markets get arbitraged due to this, starting with social media dogfooding.
  7. what's the necessary cost of mainstream journalism to the general public? I'm sure you'd agree that should be weighed. same as 6), what's newsworthy & why should it be published by a corporate media company?
  8. on the tail-end to this, legislature & lobbying costs just got profoundly cheaper. also cheaper to pick apart pork-barrel or other inconsistencies therein.

these are just a few downstream effects. and I'm leaving out the parallel gains in manufacturing automation, machine vision, crowdsourcing, etc.

1

nova_demosthenes t1_jd5co5y wrote

Software architects design software or modifications into "chunks" that perform simple operations. Since many of those chunks have established "convention," they are autogenerated.

The newer parts are then built synthetically by AI: scanning countless samples, interpreting them down into sub-components, and stitching together a new piece of software that's a reasonable approximation of what the chunk is described, in human language, as needing to do.

Your software engineers then review and verify the code.

So it's incredibly quick iterations.

3