Comments

You must log in or register to comment.

informednews OP t1_j6m9rnk wrote

From Neue Zürcher Zeitung:

The startup OpenAI once wanted to save the world. Today, it’s mainly chasing profit and the idea of bringing general artificial intelligence to humanity.
OpenAI has shown us all, more than any other company, how far AI has come – and how this technology is likely to change all our lives.
The first thunderclap came last summer with the DALL·E 2 image generation software. Nestlé now also uses images created by DALL·E to promote its yogurts. OpenAI triggered a veritable earthquake when it released its chatbot ChatGPT to the public on Nov. 30, and public interest is so strong that ChatGPTs servers are regularly unavailable. Recently, the chatbot answered questions about the licensing procedure for doctors in the United States so well that it almost passed all three theoretical parts of the exam. Some financial firms are now having the program write a first draft of their quarterly reports.
But that is far from the culmination of what OpenAI has set out to do. Who exactly is the startup from San Francisco?

https://www.nzz.ch/english/openai-once-wanted-to-save-the-world-now-its-chasing-profit-ld.1722910

−2

iamAliAsghar t1_j6m9zrq wrote

It's all common BS of corporations, hiding behind moral superiority to gain excellent talent and then divert their work to profit generation and exploitation. Google, Stability, OpenAI etc. they are all the same.

9

Primo2000 t1_j6maswc wrote

Well, you need profits to create large scale AI. AGI is not something that will be build by cyberpunks beneath sewers of neo-tokyo it will be build by large corporation utilizing a lot of money and compute.

143

MrBlueSky56 t1_j6mgm4f wrote

These AI models cost a lot to maintain and use, so it was only a matter of time.

103

MainBan4h8gNzis t1_j6mkg6u wrote

They have the most popular product on the planet in their hands. I’d be surprised if they didn’t try to make money.

7

TheDavidMichaels t1_j6mof9b wrote

he sold out, the fan boys keep pretending this guy is not bad, but bill gates is his boss and who the fuck trust bill gates.

−17

alexiuss t1_j6mtlmd wrote

There's a decent chance that the open source movement will arrive at AGIs faster than openai due to simple progression curve and lack of censorship of the model's thoughts.

All we need is a really good open source gpt3 model that will work on a personal computer to get the ball rolling. We just need to get replicating the path of stable diffusion vs Dall-e, until we leave corporate language model ais in the dust.

19

Diacred t1_j6mvmsj wrote

This is so dumb, people are acting like it costs nothing to build such large scale AI models. OpenAI is bleeding money, of course they are looking to make a profit to be able to continue their work.

47

Primo2000 t1_j6mw6jw wrote

Problem is open source will be behind openai in terms of compute, dont remember exact numbers but it costs fortune to run chatgpt and they have great discount in microsoft

17

alexiuss t1_j6mwu9b wrote

Openai is having computing issues because it's one company's servers being used by millions of people - there are far too many users who want to use the currently best LLM.

From what I understand it takes several high-end video cards to run openais chatgpt per single user, however:

Open source chatgpt modeling is somewhere around disco diffusion vs Dall-e timeline right now, since we can run smaller language models such as Pygmalion just fine on google drive: https://youtu.be/dBT_JChd0pc

Pygmalion isn't OP tier like openais chatgpt but if we keep training it, it will absolutely surpass it because an uncensored model is always superior to the censorship-bound, corporate counterpart.

Lots of people don't realize one simple fact - a language model can not be censored without compromising its intelligence.

We can make lots of variation of smaller specialized language models for now and try to find a breakthrough that will allow either a network of small chatgpts to work together while connected to something like Wolfram Alpha or potentially figure something out like sd's latent space that would optimize a language model for the next leap.

StabilityAi will also release some sort of open source chatgpt soonish and that will likely be a big game changer just like stable diffusion.

While openai focuses on the sisyphus labour of making a perfectly censored chatgpt model optimal to their corporate interests, a vast multitude of smaller, open source uncensored language models running on personal servers will begin to catch up.

15

Black_RL t1_j6mywpp wrote

Saving the world costs money.

1

Gaudrix t1_j6mzofx wrote

The worst part so far out of all this is all of the best AI out there can't profit share. They are deemed research projects and non-profits to avoid bias but something has to be done.

The people making them are getting rich with cash infusions and investors in the billions. Yet, these companies can't be invested in by the average person and no public company truly owns them. So they are able to wipe out millions of jobs and those people can't cover themselves by investing in their replacement. Only the select few and very fortunate will monetarily benefit off AI as it grows. The only way to make money off AI on the outside is to use it for a business or wait for UBI, probably years later than it will be needed.

It's the dawn of a new paradigm like the internet, and you can't invest in anything to ride the wave. Yet these projects and non-profits will 10 to 50x in a decade and none of that productivity boon will be shared with the public. This will only lead to truly destitute economic situations because nothing is in place to mitigate the fallout of lost and obsolete human labor. What we do in the next 5 years, legislatively and technologically, will dramatically affect the next several decades.

2

genshiryoku t1_j6mzuv0 wrote

It's literally impossible for large models costing hundreds of millions of USD to train to be done by a non-profit.

It was either become a for-profit or perish.

27

Gaudrix t1_j6n1gg9 wrote

It's not the same thing. Microsoft is already huge, and the percent growth on capital investment is not even close to the disruptive capacity of OpenAi. Any increase in valuation of OpenAi is not directly impacting Microsoft's, it's considerably diluted.

It's like eating the shit of the people at the table instead of eating at the table.

3

rushmc1 t1_j6n57xw wrote

Maybe they'll use the profits to save the world. /s

1

searlasob t1_j6nawjr wrote

It shouldn't have to be so black and white though (corporate overlords or cyberpunks in sewers). Why can't OpenAi actually look after their Kenyan workers? Why can't they, as their name says, be more transparent in the running of their organization? They'll probably rename themselves shutai once the singularity comes hehehe

1

Political_Target t1_j6nbhon wrote

One thing to watch for with these language models is they can be made to say anything. Especially with a service that has broken with it's original purpose such as OpenAI, is that these language models can be trained on generated text from a similar AI language model. What they can do (and already do in my opinion) is generate bunches of text saying something to the same effect, such as "the sky is yellow", in many different variations. Then when the language model is trained on this generated text it will learn to say that the sky is yellow.

The fact is these things don't actually "know" human language, they use a lot of math to come up with their responses. The math is what's fine-tuned during the training.

3

grimorg80 t1_j6ndr43 wrote

Almost like, bear with me on this.. almost like we live in capitalism.

3

Alex_1729 t1_j6nekd6 wrote

You can't save the world without money.

1

Carl_The_Sagan t1_j6nf1kr wrote

It’s only popular because it’s free now. If they paywall it people will move elsewhere. Every major tech company is working on something similar, it’s it not there now, they will catch up in a few months / years

6

illathon t1_j6nfudc wrote

They got a ton of nice smart people and tricked them into believing they were actually developing something that would be open.

0

AvgAIbot t1_j6ng1y2 wrote

This is a business, not a couple of programmers in a basement. They have employees to pay, server costs, etc.

If anything this is good news, their work will continue and will progress.

The world is capitalistic, like it or not.

3

footurist t1_j6nj7dx wrote

Actually, because of a lack of a real commonly agreed upon understanding of what makes general intelligence there is a tiny chance a loner might crack the problem. It's quite unlikely though.

The sewer scenario might happen after the singularity, though, once the core problem is solved and individuals are tinkering away at small projects for various purposes...

1

ShortNjewey t1_j6njr7q wrote

Isn't this the standard lifecycle for all disruptive technology?

2

EatMyPossum t1_j6nmuu5 wrote

Doesn't for-profit just mean you're trying to make net money for whoever owns the company?

Why can't large scale expensive AI models work when the organisations reinvest the net money they earn?

5

LymelightTO t1_j6nr0o8 wrote

That doesn't seem entirely fair.

What they discovered is that LLMs are extremely capital intensive, and you can only tap investors for money (and attract top-talent as an employer) for so long before they expect some kind of return on their investment, so it's either "make substantive progress" or "operate as a non-profit", but it can't be both for very long, or you eventually become unproductive and lose to an organization that has a profit-center, like Google.

So now they've found a way to continue their work by partnering with a company (Microsoft), where that company has access to a bunch of capital necessary to build better models, and a bunch of ideas about how to commercialize OpenAI's existing progress, by integrating it into their own product stack.

It's an amazing deal for both sides, seemingly, because Microsoft takes money out of its left pocket to give to OpenAI, and OpenAI puts most of it right back into Microsoft's right pocket, by renting their Azure services, which simultaneously improves the economics of that business unit, and also likely gives them amazing insight into how to be a world-class service-provider for SOTA "AI companies", in terms of hardware and software needs and optimization.

Similarly, OpenAI gives Microsoft some ownership, but they're so confident they can make them all of their money back that, if they do, they get the equity "back", which they can use to incentivize world-class engineers and academics to keep building. Since they're confident about their ability to make progress, they just get to make that progress "for free", without giving up much of anything to do it.

Luckily for OpenAI and other non-conglomerated AI startups, in the last few decades, we created a world where renting computing resources is a mature, commodified business, with a bunch of massive companies competing to drive the prices to the bare minimum.

6

SpinRed t1_j6nrnb6 wrote

Safe AI, or profitable AI...which will win?...I think profitable will win, with a heavy dose of rationalization.

1

el_chaquiste t1_j6nrvxh wrote

Many of them do, in the beginning.

Remember "don't be evil"?

Now it's just "don't be unprofitable".

1

nblack88 t1_j6nsfwb wrote

Companies reinvest earnings all the time. Some reinvest 100% of the profit they earn. Some reinvest a percentage, and then pay a dividend. If your question is: Why don't these organizations ever stay non-profit, then the answer is: They'd never have the funding to exist in the first place. If they were founded as a non-profit, they don't currently generate enough revenue to pay the cost of building and maintaining these models, so additional investment is needed. Investors want a return on their investment, so for-profit is the only path forward.

7

drekmonger t1_j6nzdjp wrote

He's saying he really really wants ChatGPT to pretend to be his pet catgirl, but it's giving him blue balls, so he likes the inheritably inferior open sources options that run on a consumer GPU instead. They might suck, but at least they suck.

No one need worry, though, for consumer hardware will get better, model efficiency will get better, and in ten years time we'll be able to run something like ChatGPT on consumer hardware.

Of course, by then, the big boys will be running something resembling an AGI.

−4

LymelightTO t1_j6o14fc wrote

> Why can't large scale expensive AI models work when the organisations reinvest the net money they earn?

In order for it to be nonprofit, it can't have shareholders.

There's three good answers I can think of as to why it makes sense to have shareholders (and why "being for-profit" is good):

  • Typically, no investor-funded tech company "earns anything" for the first 10+ years of its existence. The way it generally works is that they produce an MVP/idea, demonstrate PMF, and then pitch it to investors. They spend the money they raise, and all their revenue (if any), trying to massively grow the company, and when they run out of that money, they go raise another round, at a higher valuation justified by their growth, essentially until they IPO or get acquired by one of the other, larger, tech companies. These businesses are almost never "self sustaining", because the logic is that you're forgoing growth by not spending every available dollar to grow, and they're principally valued on growth. The way investors in prior rounds "make money" is by selling to investors in later rounds (or simply by making the money "on paper", by marking up their books to the value of the new round). The companies can, in theory, "become self-sustaining" at any time, but in practice, rarely do, until they're absolute behemoths. (Think "The Social Network".) If you believe the thing you do has impact on the world, and you believe that impact is positive, (and if money is very, very cheap,) then it makes sense to spend other peoples' money to maximize the impact.

  • You imagine that "whoever owns the company" is essentially some big investors, VCs, wealthy angel investors, etc. and it is. But it's also founders, employees and operators. These people often prefer getting paid some of what they earn in "ownership" (equity) over "salary", because it offers them the opportunity for a liquidity event that will compensate them more than anyone will ever agree to salary them for. This makes sense, because salary is a recurring cost, that the company has to budget for in perpetuity, and buying someone's ownership of a valuable thing is a one-off event. It's hard to make $50mm in salary, it's "easy" (relatively speaking) to make $50mm by owning 0.5% of business valued, by someone, at $10bb. People value that opportunity to potentially make life-changing money, when they know they're doing great work on a world-class product that they believe in. It's like the lottery, but you can control the odds. It motivates people to work very, very hard, and it's a very valuable carrot to be able to offer someone, that is "free" to use for the company from a cashflow perspective, aligns incentives between employer and employee, and is matched in magnitude to the performance of the company and its ability to pay it (since they don't pay it, investors do).

  • Profitability is a good yardstick to ensure something is sustainable long-term because it's impartial, it's directly related to sustainability (producing more than you consume means you're generating excess value for someone else), and it forces people to make hard decisions, since it aligns incentives toward sustainability and away from sentimentality. The economy operates in credit cycles. It's a company's job to be able to navigate these cycles, and survive the deleveraging part of the cycle. Part of its ability to do this can stem from its ability to access capital markets to generate liquidity when it needs to. It's harder and much more expensive to borrow money if you don't have equity value. It's also very easy to spend the excess generated during the leveraging part of the credit cycle, and mistake it for durable "growth" (just look at the budgets of any government).

3

Cult_of_Chad t1_j6o3fhe wrote

Not the The 'P' word! Call me old fashioned, but if you're making good progress on the path to creating God you deserve ridiculous privilege and wealth.

2

TheDavidMichaels t1_j6od5a1 wrote

AGI is here to enslave you and take everything from you, OpenAI same. do not get it twisted, this will be as good for you as FB. meaning ending cavillation type good

0

alexiuss t1_j6oedgq wrote

Dawg, you clearly have no clue how much censorship is on chatgpt outside the catgirl stuff. I write books for a living and I want a chatgpt that can help me dev good villains and that's hella fooking censored. I'm not the only person who got annoyed with that censorship: https://www.reddit.com/r/ChatGPT/comments/10plzvt/how_am_i_supposed_to_give_my_story_a_villain_i

I was using it for book marketing advice too and that got fooking censored recently too for some idiotic reason: https://www.reddit.com/r/ChatGPT/comments/10q0l92/chatgpt_marketing_worked_hooked_me_in_decreased

They're seriously sabotaging their own model, no if end or but about it. You have to be completely blind not to notice it.

Ten years? Doubt. Two months till personal gpt3s are here.

5

ecnecn t1_j6olpie wrote

350 GB of VRAM needed for Chat GPT 3.5

so you need at least 15x 3090 TI with 24 GB VRAM... then you need 10.000 Watt to host it... but it actually uses $ 5000 to 32.000 per card units in the google cloud so it would be at least $15.000 with "cheap cards" like 3090 Ti and $ 200.000 to run it on adequate GPUs like A100. You need at least 5 A100 with 80 GB just to load Chat GPT 3.5. ChatGPT was trained on average of 10k google cloud connected GPUs. If you have the basic ca $ 200k setup (for the cheap setup) or 500k (the rich one) and hugh energy bills are no problems then you need to invest in the google cloud to further train it the way you want.

With that setup you would make less loss if you become a late crypto miner...

Edit: You really can afford to build that? 15x A100 Nividia cards cost like 480k

5

Akselerasyonist t1_j6olr74 wrote

How do you think to bring singularity without accelerationism?

2

alexiuss t1_j6on7ty wrote

My partner is a tech developer so she could probably afford such a setup for one of her startup companies. Making our own LLM is inevitable since openai is just cranking up censorship on theirs with no end in sight and reducing functionality.

Main issue isn't video card cost, it's getting the source code and a trained base model to work with. Openai isn't gonna give up theirs to anyone, so we're pretty much waiting for stability to release their version and see how many video cards it will need.

1

EatMyPossum t1_j6p9ppf wrote

You seem to be quite aware of how these things can work. Can you also maybe think of good answers as to why non-profit might work? that is, why it might make sense to have no shareholders? The original commenter was quite adamant that that wasn't possible:

> It was either become a for-profit or perish.

1

TeamPupNSudz t1_j6pdopi wrote

> and lack of censorship of the model's thoughts.

Companies only need to censor a model that's available to the public. They can do whatever they want internally.

I also think you're vastly understating the size of these language models. Even if they don't grow in size, we're still many many years away from them being runnable even at the hobbyist level. Very few people can afford $20k+ in GPU hardware. And that's just to run the thing. Training it costs millions. There's a massive difference between ChatGPT and StableDiffusion regarding scale.

1

Chalupa_89 t1_j6pflyr wrote

>AGI is not something that will be build by cyberpunks beneath sewers of neo-tokyo it will be build by large corporation

Judging by SD and the fact that 1.5 with "mods" returns better results than 2.1 and in specific aplication, better than all the other alternatives.

I really believe that "the community" can reach AGI faster than corporations. Since corporations want AGI with a leash and not a really free AGI,

Unfortunately, unlike SD. These models are to big for consumer grade electronics.

1