Submitted by Kaarssteun t3_zz3lwt in singularity
To those with a slight grasp of LLMs, you might have noticed ChatGPT isn't that big of a deal architecturally speaking. It uses an updated version of GPT - GPT-3.5 - fine-tuned on conversational data with RLHF (reinforcement learning from human feedback).
Anyone could have had this functionality - a smart chatbot capable of slicing a big chunk off your workload - with a little prompt engineering in OpenAI's Playground.
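That "playground prompt engineering" can be sketched in a few lines. This is a minimal, hypothetical illustration - the preamble wording, turn labels, and the commented-out API call are my own guesses at the pattern, not OpenAI's actual prompt or internals:

```python
# Sketch of a "pre-prompted" chat wrapper around a completion-style LLM,
# in the spirit of the playground trick described above. The preamble and
# "Human:"/"AI:" labels are illustrative, not OpenAI's real prompt.

PREAMBLE = (
    "The following is a conversation between a helpful, knowledgeable "
    "AI assistant and a human user.\n"
)

def build_chat_prompt(history, user_message):
    """Assemble a chat-style prompt from prior (user, ai) turns."""
    lines = [PREAMBLE]
    for user_turn, ai_turn in history:
        lines.append(f"Human: {user_turn}")
        lines.append(f"AI: {ai_turn}")
    lines.append(f"Human: {user_message}")
    lines.append("AI:")  # the model completes the text from here
    return "\n".join(lines)

# The assembled string would then go to a completions endpoint, e.g.
# (hypothetical call, needs an API key and the openai package):
#   openai.Completion.create(model="text-davinci-003",
#                            prompt=prompt, stop=["Human:"])
prompt = build_chat_prompt([("Hi!", "Hello! How can I help?")],
                           "Summarize RLHF for me.")
```

Stopping generation at the next `Human:` marker is what keeps a plain completion model acting like one side of a chat - which is roughly all the "architecture" a playground chatbot needed.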
No source for this one, but if I recall correctly, ChatGPT wasn't that big of a project - understandable, given it's not much more than an easy-to-use, pre-prompted interface to GPT-3.5. OpenAI likely did not expect this kind of reaction from the general public, given that their three previous big language models were certainly not talked about on the streets. ChatGPT's familiar format - a simple chat interface - wholly dictated its success.
ChatGPT is officially a research preview - one that subsequently exploded. Instead of collecting human feedback at little extra computational cost, OpenAI now faces hordes of people sucking the FLOPS out of its vaults for puny tasks, expecting it to remain readily available and free - while the costs for OpenAI are "eye-watering".
OpenAI cannot shut this thing down anymore; the cat's out of the bag. This is of course exciting from an r/singularity user's perspective: Google is scrambling to cling to the reins of every internet user, and AI awareness is higher than it has ever been.
I just can't imagine this was the optimal outcome for OpenAI!
jdmcnair t1_j29hpnz wrote
For all of the FLOPS people are sucking down, OpenAI is getting a fucking massive boost in that RLHF you mention. It may not be paying for itself yet, but it's more than worth the investment for the real-world human training context they're getting.
And when they do decide to close down the public preview and move to a subscription model, lots of people will go for it, because its usefulness has already been clearly proven.