Submitted by mjrossman t3_11ws42u in Futurology
TL;DR - LLM finetuning costs just dropped precipitously with Alpaca. The same has yet to manifest for other model frameworks (linked below), but presumably it will. the recent job-impact paper (linked below) shows which jobs are most exposed, but what it doesn't reveal is the ease with which firms, and broader multi-stage enterprises, can self-organize and compete on this basis.
there have been a lot of recent developments, and obviously the most pressing trend is AI's societal impact. if anyone's read the recent job-impact paper, one of the factors that jumped out was the exposure of blockchain engineering to AI-based automation. whether that paper is logically sound is worth debating on its own, but we should also explore, in simpler terms, the significance of an effectively automated, self-regulating computing superstructure, and compare it to the other exposed software domains in the same scope.
most web-based software falls on a spectrum: from fully obscure cloud applications with some unknown scope of human supervision, all the way to fully self-forking codebases with open requests for comment. if you place something like Wikipedia or Reddit on this spectrum, you'd probably guess those platforms sit closer to the open end than the obscure end. now consider StableDiffusion/LLaMa/Alpaca vs Midjourney/ChatGPT. I can't overstate this: if the economics demonstrably favor the open end (as recent LLM developments suggest), then we have to extend this line of reasoning to the downstream markets.
all human jobs are either self-supervised or sit in a hierarchy of supervisors, and this determines the labor market. in fact, the biggest argument against "robots taking our jobs" has been the macroeconomics of how cheap human labor can be, and entire governments make bets on this idea. in case anyone is interested, here's a recent high-level breakdown of "employer" vs "non-employer" firms: https://cdn.advocacy.sba.gov/wp-content/uploads/2020/11/05122043/Small-Business-FAQ-2020.pdf
this raises the question: what is the free-market value of executive function? I mean, really, in stark terms, how expensive is it to form a de facto union within an employer firm, or to incorporate multiple partners into a limited-liability small-business model? if someone makes the claim "we, a team of 5, have the means to deploy the corporate equivalent of a unicorn with 1/1000th the capital cost," the corollary question is: how much is the public market willing to spend on an incumbent firm that pays millions to compensate a CEO and employs thousands at "competitive" rates? what minimum of private equity is needed to guarantee market capture for a startup? I guarantee you those margins trend to zero over years, if not months. the commercial/shareholder landscape is about to become extremely interesting.
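to put rough numbers on that claim, here's a back-of-the-envelope sketch; every dollar figure and headcount below is an illustrative assumption, not data:

```python
# back-of-the-envelope: capital cost per head of "executive function"
# all figures are illustrative assumptions, not real data

incumbent_capital = 1_000_000_000  # a "unicorn" valuation, ~$1B
incumbent_headcount = 1_000        # employs thousands, CEO comp in the millions

startup_capital = incumbent_capital / 1_000  # the hypothetical 1/1000th figure
startup_headcount = 5                        # "we, a team of 5"

print(f"incumbent capital per head: ${incumbent_capital / incumbent_headcount:,.0f}")
print(f"startup capital per head:   ${startup_capital / startup_headcount:,.0f}")
# if both produce comparable output, the premium the public market pays for
# the incumbent's org chart is exactly the margin that trends toward zero
```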
but back to the point: what does a highly exposed, self-regulating software ecosystem mean on its own terms? why are Alpaca & Langchain so significant in this context? the answer is how catalyzed dogfooding becomes over a matter of months. the cost of finetuning a small, "run on your Raspberry Pi" LLM on any subdivision of knowledge (especially codebases) just dropped to retail levels (see the first sketch below). the next cost to drop is the discovery of high-level SOPs via low-level daisy-chaining of these diverse models (second sketch below). and given the preexisting, battle-tested examples of n-tiered application architecture on blockchains, the marginal cost of smart-policy development, testing, and auditing drops as well over the next few years. that's the market for arbitrary executive function of any group of market participants. with respect to ML frameworks like sparsely-gated MoE, world models, multimodality, and adaptive agents: we won't see the shoe drop until the costs cross the critical threshold, but it should be clear that we can assume they will, and make educated guesses as to when. and I haven't even described the potential impact of Learning@Home in that respect.
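to ground the "retail-level finetuning" claim: the Alpaca recipe boils down to parameter-efficient finetuning (LoRA), where only tiny adapter matrices train on top of a frozen base model. a minimal sketch with Hugging Face's peft library, assuming a LLaMA-7B checkpoint is already on disk (the path is a placeholder, and the hyperparameters follow the commonly cited Alpaca-LoRA settings rather than a verified recipe):

```python
# minimal LoRA finetuning setup (Alpaca-style) via Hugging Face peft.
# model path is a placeholder; hyperparameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")  # placeholder
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-7b")

lora = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the 7B weights
```

because only the adapter weights receive gradients, a single consumer GPU (or a few dollars of rented spot compute) suffices, and that is the entire cost story.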
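and the "high-level SOPs via low-level daisy-chaining" point is basically what Langchain packages: composing several (possibly specialized, finetuned) models into one pipeline. a sketch using LangChain's sequential chains; the local Alpaca weight file and the two prompts are assumptions for illustration, and any LLM wrapper LangChain supports would slot in the same way:

```python
# daisy-chaining two prompt stages into one SOP-like pipeline with LangChain.
# the LlamaCpp wrapper pointing at local Alpaca weights is an assumption here.
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = LlamaCpp(model_path="./alpaca-7b-q4.bin")  # placeholder local weights

# stage 1: draft a standard operating procedure for a task
draft = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["task"],
        template="Write a step-by-step SOP for: {task}",
    ),
)

# stage 2: audit the draft against a policy constraint
audit = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["sop"],
        template="Flag any step in this SOP that needs human sign-off:\n{sop}",
    ),
)

pipeline = SimpleSequentialChain(chains=[draft, audit])
print(pipeline.run("onboard a new vendor"))
```

each stage could just as easily be a different finetuned model; that's where the diversity of cheap, specialized LLMs compounds.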
if anyone has heard of the "Mechanical Turk": it's a setup in which humans cooperate to falsely appear as a complex mechanism. as Charlie Munger says, "show me the incentives [or costs], and I'll show you the outcome." it's not about how AI displaces us; it's about what AI compels us to freely displace. at the end of the day, the only vibe that matters is the potential impact of any given tinkering in any given garage, literally or metaphorically. and just like in natural ecology, we can and should expect that the marginal gains of obscure AI, or of a deliberately inefficient labor economy, are going to be dwarfed by something open-source. just food for thought.
Gameplan492 t1_jczru9p wrote
I've often felt that AI is a bit like virtual reality - it's promised a lot over the decades and is undoubtedly better than previous iterations, but it's still not a substitute for the real thing.
Take the example of code writing. It will help make engineering faster, but you still need to know what to ask for and then what to do with the output. Until AI can guess what we need and how and where we want it implemented, how can it really replace a human?