Submitted by Kaarssteun t3_zz3lwt in singularity

Those with even a slight grasp of LLMs might have noticed that ChatGPT isn't that big of a deal architecturally speaking. It's an updated version of GPT, GPT-3.5, fine-tuned on conversational data with RLHF (reinforcement learning from human feedback).
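(For the curious: the heart of the RLHF reward-modelling step is just a pairwise preference loss. A minimal pure-Python sketch, illustrative only; real implementations train a neural reward model on human rankings and then run PPO against it:)

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss used when training an RLHF reward model:
    the model is pushed to score the human-preferred response higher."""
    # loss = -log(sigmoid(r_chosen - r_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# If the reward model already ranks the human-preferred answer higher,
# the loss is small; if it ranks it lower, the loss is large.
low = preference_loss(2.0, -1.0)   # model agrees with the human
high = preference_loss(-1.0, 2.0)  # model disagrees
assert low < high
```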

Anyone could have had this functionality, a smart chatbot capable of slicing a big chunk off your workload, with a little prompt engineering in OpenAI's playground.

No source for this one, but if I recall correctly ChatGPT wasn't that big of a project, which is understandable given it's not much more than an easy-to-use, pre-prompted interface to GPT-3.5. OpenAI likely did not expect this kind of reaction from the general public, given that their three previous big language models were certainly not talked about on the streets. ChatGPT's familiar format, a simple chat interface, wholly dictated its success.

ChatGPT is officially a research preview, which subsequently exploded. Instead of collecting human feedback at little extra computational cost, they now face hordes of people sucking the FLOPS out of their vaults for puny tasks, expecting this to remain readily available and free, while the costs for OpenAI are "eye-watering".

OpenAI cannot shut this thing down anymore; the cat's out of the bag. This is of course exciting from an r/singularity user's perspective: Google is scrambling to cling to the reins of every internet user, and AI awareness is higher than it has ever been.

I just can't imagine this was the optimal outcome for OpenAI!

70

Comments


hauntedhivezzz t1_j299yeo wrote

Umm, the optimal outcome was a viral hit / free marketing, which would lead to an excited user base who would then pay for their product.

89

Kinexity t1_j29az33 wrote

The only things they did wrong were not collecting user feedback from the beginning and not starting with a slow rollout like they did with DALL-E 2, though that could have hampered the popularity. Other than that, I'm not sure what you're trying to say here or what else they were supposed to do.

2

lloesche t1_j29b5sv wrote

Idk, they could take it out of research/beta mode, switch to a $30/month subscription, citing the enormous cost associated with providing the service, and nobody would bat an eye.

That is until Google comes out with their own, provided free of charge, injected with what Google does best, ads.

9

YouGotNieds t1_j29b9y2 wrote

I disagree with most of what you are saying.

First, you can pay for lifetime access to ChatGPT now; even at its current version, I am sure many people would consider buying pro versions of this.

Second, don't forget ChatGPT is version 1 at the moment. It can only go up from here and start being used in finance, accounting, HR, and many more areas of business that we can't even think of yet.

Third, I am sure the data OpenAI is getting from ChatGPT is in itself extremely valuable for the company, as they see more data and the types of tasks people ask for. This gives them an actual idea of how they could create a profitable product that could compete with the likes of Google or Alexa, but just way better.

17

Think_Olive_1000 t1_j29bca8 wrote

They've cut the number of requests you can make per hour, which addresses the cost issue somewhat. I think they can plug the hole made by the unexpected influx with their marketing budget. It's got to be one of the most successful tech product launches of all time, with the number of unique new users reaching the million mark within a week of going live.

12

blueSGL t1_j29d1ee wrote

> There is no product for them to sell given people now expect this service to be free.

I don't get the argument.

If they want to yoink it and put it behind a paywall where you pay for tokens they could do that today.

If people still want to use it they pay or stop using it.

This has happened before. (look at Dalle2)

13

apinkphoenix t1_j29eg4m wrote

This is nonsense. You are aware that they can shut down their servers at any time they want, right? Even though they describe their costs as “eye-watering” it doesn’t mean they can’t afford it.

As for there being an expectation of being free… lol. This is a very useful tool we’re talking about here. It’s only going to get better with time. They can and will monetise it and you’ll damn well like it!

52

jloverich t1_j29eu0p wrote

Fwiw, you.com already has an LLM similar to ChatGPT on their website.

1

NoName847 t1_j29he90 wrote

Didn't Sam say on Twitter that all the feedback is amazing and exciting? They could shut it down any minute if it weren't what they want.

If people really throw a tantrum when it starts costing money, I think these entitled, irrational people can be ignored.

edit: "we are learning so much from ChatGPT; it's going to get a lot better, less annoying, and more useful fast." Sounds like they love it.

6

jdmcnair t1_j29hpnz wrote

For all of the FLOPS people are sucking down, OpenAI is getting a fucking massive boost in that RLHF you mention. It may not be paying for itself yet, but it's more than worth the investment for the real-world human training context they're getting.

And when they do decide to close down the public preview and go for a subscription model, lots of people will go for it, because they've already proven out how clearly useful it is.

148

Zermelane t1_j29mja2 wrote

Sounds scammy. ChatGPT itself does not have an official API at all, and the research preview is free to use. The other models that do have official APIs are charged by usage, so purchasing lifetime access from a third party is... not likely to be a good deal, is the best I can say.

7

nebson10 t1_j29ndrw wrote

They can shut down the free research preview at any time. If they haven't shut it down yet, it must be to their benefit to keep it open for some reason.

1

No_Ask_994 t1_j29nq8w wrote

Of course they can shut it down.

If they don't, it's because it's worth it for them.

I expect an assistant built on GPT-4 next year, much better, and of course pay-per-use.

2

Equivalent-Ice-7274 t1_j29o5go wrote

Agreed - Google could add hyperlinks throughout the response text, as well as banner ads above and below, and perhaps even video commercials before you get to see the AI's response. Then they could charge a per-employee subscription for companies that want to buy, just like they do with Google Workspace. Google will make mountains of money off of this.

0

el_chaquiste t1_j29rvra wrote

We are masses of unpaid beta testers for their system, finding bugs and awkward prompts they need to edit by hand. Definitely worth it for them.

Thanks to that, GPT-4 will be far less dumb.

3

TheTomatoBoy9 t1_j29s315 wrote

The expectations for it to be free are with the current version. Subsequent versions will easily be marketed as premium and sold through subscriptions.

Then, this doesn't even address the whole business market where expensive licenses can be sold.

Finally, they are bankrolled by Microsoft, among others. Eye-watering costs are only eye-watering to small startups. It's not much of a problem when the company backing you is sitting on $110 billion in cash.

In the tech world, you can lose money for years if you can sell a good growth story. Especially with backers like Microsoft.

3

Imaginary_Ad307 t1_j29wohs wrote

As I remember, they are backed by Microsoft, so they can take it and most certainly will make a profit from it in the near future.

1

blueSGL t1_j29xsiu wrote

but that's not what ChatGPT is offering.

Anywhere that is able to do the sorts of things ChatGPT does will be in a 'loss leader' phase to begin with to attract customers, or offer [x] free tokens per month, or some other marketing trick.

Until inference cost is lower than the cash generated via advertising, all services will be losing money; at that point it's either start charging or stop the service.


ChatGPT has succeeded in getting the name out. They are losing money by operating (if the training data they are getting from people is worth less than inference costs), so the solution is to start charging money.

Continually running a product that is in the red to prevent competitors products who are also in the red from succeeding seems like poor decision making in the long term.

5

treedmt t1_j29yope wrote

LUCI is also built on a fine-tuned GPT-3.5 model, so it's pretty close to ChatGPT in terms of capabilities.

They have a very different monetisation model afaik. They are tokenising the promise of future revenue to monetise, instead of charging customers up front.

> if the training data is worth less than the inference cost.

The thesis is that training data could be worth much more than inference cost, if it is high quality, unique, and targeted to one format (eg. problem:solution or question:answer)

In fact, I believe they’re rolling out “ask-to-earn” very shortly, which will reward users for asking high quality questions and rating the answers, in Luci credits. The focus appears to be solely on accumulating a massive high quality QA database, which will have far more value in the future.

I’m not aware of any rate limits yet but naturally they may be applied to prevent spam etc., however keeping the base model free is core to their data collection strategy.

2

Mementoroid t1_j29ysks wrote

Hopefully one day there'll be an AI that can generate pennies in my account every time I read "the cat is out of the bag" or the good old "the genie is out of the lamp".

0

No_Ninja3309_NoNoYes t1_j2a7kfc wrote

Yeah, but how much is it? A million dollars an hour? More? Methinks they're exaggerating to sound cooler than they are.

2

SoylentRox t1_j2adgyv wrote

Yep. I want a premium tier where I can make as many queries as I want and get immediate responses with no cooldowns. I would expect a monthly plan where I get a certain number of queries included and can buy more.

In a few years I would expect my employer to pay for the subscription but in the immediate future I'm happy to do so. I don't ask it to write anything I can't write but it saves all this time.

20

SoylentRox t1_j2aehvs wrote

I don't see how "lifetime access" makes any sense.

(1) Assuming it's to the current model and not future updates, that would be like buying a "lifetime copy" of MS-DOS internal beta 0.7 (whatever they called it back then), or an iphone 1 loaded with a pre-release copy of the OS.

It may work offline for your lifetime, but it's going to be useless compared to what's available within months.

(2) Who's hosting it? GPT-3 is around 180 billion parameters, or 720 gigabytes of memory in fp32. This means the only thing capable of running it currently is a cluster of eight Nvidia A100s with 80 GB of memory each; each costs $25,000 and consumes 400 watts of power.

I'm not sure how "fast" it is. If you see ChatGPT typing for 30 seconds, are you drawing 3.2 kilowatts of power just for your session? I don't think it's that high; probably the delays mean it's servicing other users.
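The back-of-the-envelope math checks out for fp32, and at fp16 the footprint halves, which is roughly how an 8×A100 (640 GB total) node could host it. A quick sketch (the parameter count is the estimate above, not an official figure):

```python
import math

params = 180e9          # ~GPT-3-scale parameter count (commenter's estimate)
a100_mem_gb = 80        # one NVIDIA A100, 80 GB variant

# Memory needed just to hold the weights, by precision.
for name, bytes_per_param in [("fp32", 4), ("fp16", 2)]:
    total_gb = params * bytes_per_param / 1e9
    gpus = math.ceil(total_gb / a100_mem_gb)
    print(f"{name}: {total_gb:.0f} GB of weights -> at least {gpus} A100s")
    # fp32: 720 GB -> at least 9 A100s
    # fp16: 360 GB -> at least 5 A100s
```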

1

Schyte96 t1_j2akuv0 wrote

I'm expecting it to go paid in short order, TBH. Let's say it's 10 USD/mo like Copilot; that might be the best use of 10 USD in the world.

1

Glitched-Lies t1_j2artba wrote

It is very much overhyped. That much is true. Nobody expected it to be so incredibly popular. I guess it's just because it codes, and lots of people made it popular for no reason.

2

GuyWithLag t1_j2atfmc wrote

Fuck it, I'm an IT pro and the way that ChatGPT can generate corporate boilerplate already saves me hours per month; I'd be willing to pay for a subscription just for that, and I assume I'll find more uses as it improves.

2

DukkyDrake t1_j2aztlg wrote

>expecting this to remain readily available and free - while the costs for openai are "eye-watering"

The rest of that quote.

2

_z_o t1_j2b2g9v wrote

The more traffic it gets, the more investment money it will be given. Investors expect them to figure out a profitable model in the future. An AI-based search engine with a small wiki-like answer, plus AI-curated links to AI-validated content and products, could make Google obsolete overnight. Ad money will flow into it.

1

dietcheese t1_j2b8p7p wrote

They will start charging, and instead of spending millions they will make millions every month.

These people are not stupid - they have major backers funding and advising them.

10

somethingstrang t1_j2b8xgg wrote

RLHF sounds like most of the work and the main improvement over the base transformer architecture. Hard to recreate, because I'd imagine there are tens of thousands of man-hours involved.

2

no-longer-banned t1_j2bas7o wrote

OpenAI is backed by venture capital. The money doesn’t really matter. If they needed more, there would be a line of investors miles long trying to get in on this company.

Users and data. That’s all they really care about.

1

kalydrae t1_j2be8w9 wrote

I have been trying to work out what kind of implementation of this technology you can have on the average home computer. For now, the RAM/GPU required to run GPT-3.5 is beyond the compute power of the average power user's home equipment.

You can make a basic system that can do limited tasks for specific inputs, but the chatbot using the parameter set for GPT-3.5 is too large.

And the emergent properties present in this larger model are the most interesting and useful part of the current progress. So for NOW, OpenAI has a huge market advantage: a live product with a huge existing user base and the compute power to support current throttled usage.

If I were OpenAI, I would be looking at how to launch the paid 'beta' product for generic use, and then look at the subset of interactions on the free version to see if there are use cases that could do with additional training inputs to give further enhanced interactions. Some of my nebulous thoughts on potential use cases for custom products that people might pay for include:

Roleplaying bot: partner with online role-playing systems to ingest large amounts of (anonymised) conversational data and collect human feedback for training the new model.

Developer/infrastructure/IT helper: ingest even more publicly available data from Q&A forums, open-source systems documentation and support forums, GitHub, etc.

Private instances of ChatGPT with a "commercial in confidence" license, so that businesses can provide their commercial IP datasets and transform the chatbot into the company knowledge system - all data, processes, and procedures can be used and accessed in a dynamically linked, interactive, proprietary context. (It would also need to conform with country/state privacy laws, etc.)

Similar private instances provided to academic institutions where all academic and student information, emails and conversations (also anonymised) can be used to train the course and subject matter expert bots that can assist academics to design courseware and students to learn and understand much faster.

I think once we can run our LLM models on home computers, all bets are off. Your fridge might have a bot to tell you the options for dinner. Your wallet will alert you when your expenses are off track from previous months. You will ask your home assistant for a daily plan and it will remind you to take your medicine and prompt you to eat/drink something depending on your current vitals... The next steps are very exciting.

I am sure there are issues with these ideas but I'm very excited to see where this all goes!

2

visarga t1_j2bi28f wrote

I expect that within the next 12 months we'll have an open model that can rival ChatGPT and runs on more accessible hardware, like 2-4 GPUs. There's a lot of room to optimise inference cost; Flan-T5 is a step in that direction.

I think the community trend is to make small, efficient models that rival the original but run on local hardware in privacy. For now, the efficient versions are only about 50% as good as GPT-3 and ChatGPT.

2

10GigabitCheese t1_j2bjn5f wrote

One of the few subscriptions that would significantly add value to your day job. Plus, if it had access to the live internet, it would save hours of researching basic tasks you've never done before and don't know the correct jargon to google.

It’s like a personal assistant or private tutor.

2

Superschlenz t1_j2bpvaj wrote

AFAIK, the lawyers will only shut it down if it doesn't explicitly declare paid ads as such. As long as they don't integrate ads into ChatGPT itself but only show them in the user interface, they should be OK. Of course, there is still the copyright issue if it outputs information from publishers without directing users to their websites.

0

chadbarrett t1_j2bq86c wrote

Yesterday I spent a good 2 hours googling a bunch of stuff with total failure. For instance, you would think a 10-year, year-over-year average electricity rate in California would be easy to find, but it wasn't. Instead it was nothing but SEO-dense nonsense talking about vague nothing. And then I decided to ask GPT and got that data in seconds. No idea where or how it found it, but it did.

2

stevenbrown375 t1_j2bt7cj wrote

A few things off the top of my head:

  • Writing criteria and methodologies for marketing studies.
  • Converting table data into prose.
  • Copywriting (duh)
  • Creative brainstorming
  • Project planning and basic guidance
  • A file-naming-standards widget I’m building in Excel, and potentially in PowerApps.
  • Building a style guide
  • Writing go-to-market plans
  • JavaScript expressions for Adobe After Effects
  • Presentation planning
8

theghostecho t1_j2bw6nl wrote

People weren’t talking about it when gpt first came out because they were busy learning about them.

1

-ZeroRelevance- t1_j2bzq04 wrote

If you didn’t know, you can actually train a fine-tuned model through the playground if you want, you just need to supply the training set and pay a bit more, which may be a bit tricky depending on your resources though.

2

stevenbrown375 t1_j2c122l wrote

Good to know. We have data scientists here who could implement something like this, but they're working on stuff that's way too specific to train a model on good marketing practices just for my little department. All in all, though, ChatGPT has been really great as-is. I feel like I have a new work buddy, and I'm ravenously consuming every GPT-4 rumor I can find.

6

LoneRedWolf24 t1_j2c5hcc wrote

As others have stated, they can shut it down if they choose to and they likely will monetize ChatGPT. However, I don't think it will be long until real competition reveals itself and a free alternative is offered.

1

Lawjarp2 t1_j2cd8kz wrote

That's why they put limits on it. If it's really useful, people and companies will pay to use it and cover the costs. If it's useless, it will get shut down eventually. I do think it will be useful for companies with GPT-4, and that's why they have released it now: to get more companies ready for it.

1

SoylentRox t1_j2ckmtj wrote

And there's a bunch of obvious automated training it could do to be specifically better at software coding.

It could complete all the challenges on sites like LeetCode and CodeSignal, learning from its mistakes.

It could be given challenges to take an existing program and make it run faster, learning from a timing analysis.

It could take existing programs and be asked to fix the bugs so it passes a unit test.

It could be asked to write a unit test that makes an existing program fail.

And so on. Ultimately, millions of separate tasks that the machine can get objective feedback on, so it can refine its skills to be above human.
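The generate-test-keep-the-best loop described above can be caricatured in a few lines. This toy searches over random candidate "programs" scored against unit tests; a real system would refine model weights or prompts rather than random coefficients:

```python
import random

def run_tests(program, tests):
    """Score a candidate 'program' (here just a function) by how many
    unit tests it passes -- the objective feedback described above."""
    return sum(1 for inp, want in tests if program(inp) == want)

# Toy stand-in: candidates are coefficient guesses for f(x) = a*x + b.
tests = [(0, 3), (1, 5), (2, 7)]          # hidden target: f(x) = 2x + 3

def make_candidate():
    a, b = random.randint(0, 5), random.randint(0, 5)
    return lambda x: a * x + b

random.seed(0)
best, best_score = None, -1
for _ in range(500):                       # generate, test, keep the best
    cand = make_candidate()
    score = run_tests(cand, tests)
    if score > best_score:
        best, best_score = cand, score

assert best_score == len(tests)            # all tests eventually pass
```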

3

SoylentRox t1_j2cla5j wrote

Imagine if the next best search engine was like an early version of bing and NOTHING else existed.

And nobody was remotely close to releasing anything better. Would you pay for it then?

If OpenAI starts charging for chatGPT, whatcha gonna do? Keep writing shit by hand?

The computational requirements are so expensive that realistically this is going to be a paid service, maybe forever.

I say forever because compute will get much cheaper over time, but the best models will use even more compute and be much smarter. All the elite people will be using top end models, plebs using free models won't have the same resources.

1

sharkymcstevenson2 t1_j2cllos wrote

I think OpenAI sees themselves as an operating system rather than a direct-to-consumer business; it seems they are encouraging companies and startups to build services on top of OpenAI tech. I don't think they will compete directly with the ecosystem they are trying to build by offering their own service like that, since building an ecosystem is 100x more valuable over time.

1

Happy-Ad9354 t1_j2cnm7v wrote

Does it forward / save all your queries to OpenAI / in their databases?

1

Educational-Nobody47 t1_j2ddsyt wrote

It is extremely likely, based on your point above, that their investor meetings have gone completely nuts. There must be so much funding being offered at the door right now it's ridiculous, with people trying very hard to get in. DoorDash isn't profitable; it runs on investor funds against a future return. ChatGPT honestly could stay free, because it's an infinite flow of data coming into their coffers.

Sam Altman is on record saying "We have a soft promise to our investors that one day when we create AGI or something close to it we will ask it to help us monetize and pay investors back".

2

leonidganzha t1_j2dhbg7 wrote

OpenAI got a free army of QA testers spending hours making ChatGPT generate offensive and NSFW content, just because it was really fun. So they got a lot of valuable human-in-the-loop data out of this, which will help them develop their LLMs further.

1

SoylentRox t1_j2ei4bm wrote

No, it makes you a digital elite.

If you own stock but rent your phone, car, and home, you can move whenever you want and always have the latest car and phone. You benefit from the extra technology.

While I don't actually rent my car or phone as I don't need either to be the absolute latest, I do rent software. As anything but the most recent version is useless to me.

For AI models it's the same idea.

I have hundreds of thousands, soon to be over $1M, in stock. As much 'equity' as an extremely lucky homeowner.

0

AdminsBurnInAFire t1_j2ej85m wrote

No, the digital elite all have their possessions secured with a purchase, often multiple purchases, because they’re not foolish.

What you do not own, can always be taken from you. You don’t need to worry (too much) about your software being taken from you but you do need to worry about your house being taken from you. The only argument for renting that can be taken seriously is convenience and security always trumps convenience. The same thing for stocks, if Wall Street fucks up one day and says your stocks are worth nothing, what can you do? Meanwhile if the bank comes for your house, you have a bill of ownership protecting your rights.

2

theRIAA t1_j2ejmws wrote

> so pretty close to chatgpt in terms of capabilities

I was impressed that it could give me generic working one-liners, but that is quite far off from writing a working program with 100+ lines of code in all major languages, like ChatGPT can (effortlessly) do. But thank you for the link, it's still very useful.

1

SoylentRox t1_j2ek52w wrote

>What you do not own, can always be taken from you. You don’t need to worry (too much) about your software being taken from you but you do need to worry about your house being taken from you.

This is not a problem if you have money. Just go rent something else. Also if your landlord decides to go through the eviction procedure, there is no ASSET for you to lose.

If you own a house, and a judge decides to order it seized in a civil action (like a divorce or lawsuit), or your corrupt HOA makes up some fines of arbitrary scale and then sues you and seizes it if you can't pay, you lose the EQUITY.

I'd rather have all my assets in stock, and borrow against it if I have a need for money fast when the market is low.

1