Submitted by BobbyWOWO t3_116c4pg in singularity

Even with all the public progress that OpenAI/Microsoft has made in the past few months, I still think DeepMind will be the first to create a general intelligence. They seem to have cracked the code on Reinforcement Learning, and I think it’s probably a very intrinsic part of general intelligence and problem solving.

Either way, I usually like to keep up to date with DeepMind's progress. For the past 3 or 4 years, they've made blog posts or released papers like 2-3 times a week. But since Dec 14, they haven't released a single thing. I think that was around the same time ChatGPT came out.

I would have hoped to hear something about GATO2 or an updated Sparrow, but it’s been complete radio silence for nearly 2 months. Very unlike DeepMind…

110

Comments

lehcarfugu t1_j95ytw9 wrote

Google is realizing how disruptive chatbots are to its business model. They may want to stifle innovation until they have a gun to their head and are forced to release (see Bard).

7

GoldenRain t1_j95z7vc wrote

I think Google realized that funding all the research and then making it available to OpenAI for free, while they don't return the favor, isn't a viable strategy.

116

TFenrir t1_j960ilw wrote

This is probably likely, and not just for this reason. Demis Hassabis himself said recently in a Time magazine article that he thinks OpenAI (without naming them) don't contribute to the science, but take a lot from the science out there - which they use to push AI out into the world faster than he would like. So they probably aren't going to share as much going forward.

60

paulyivgotsomething t1_j960jno wrote

OpenAI is reading their papers, then implementing and distributing the resulting models. I think they were unhappy about that and stopped sharing. Oh yeah, and OpenAI is getting rich off the work of others and destroying the parent company that paid for the research. So I don't think you will be seeing anything for a while.

16

BobbyWOWO OP t1_j96119o wrote

Well, I would argue that most companies take the work of others to build products. Apple didn't invent hard drives, monitors, CPUs, or operating systems; they just implemented them in an innovative way. DeepMind at its core is a research company... they had to expect that others would use their science to build products.

18

lehcarfugu t1_j9639ur wrote

Well, it appears that previous inventions they open-sourced are going to hurt their bottom line. The transformer came from Google, and most of what you are seeing now stems from Google.

It might be in their best interest to stop open sourcing stuff that will only benefit their competition

7

Redditing-Dutchman t1_j966dak wrote

At the beginning of this year, DeepMind laid off a big part of its staff (just like many tech companies). The DeepMind research facility in Edmonton was even closed completely.

Could be that progress is actually slower or even halted because of this, or they kept the core team and fired people like the blog writer, for example.

16

TFenrir t1_j96815w wrote

Ah I get you. Yeah, here's the complicated thing though - Google generally provides the most valuable AI research every year, especially if you include DeepMind.

https://thundermark.medium.com/ai-research-rankings-2022-sputnik-moment-for-china-64b693386a4

If suddenly they decide that it's more important to be... Let's say cautious, about what papers they release, what impact is that going to have? Are other companies going to step up and provide more research, or are they all going to be more cautious about sharing their findings?

5

bass6c t1_j96dhu0 wrote

Most of the technologies being used by OpenAI are from either Google or DeepMind. The transformer, instruction fine-tuning, etc. came from Google Brain. OpenAI's recent success comes at a heavy cost for the AI community. Companies such as Google, Meta and Amazon will most likely stop publishing influential papers.

29

ipatimo t1_j96fneh wrote

When papers on nuclear topics stopped being published, the Soviet Union understood that the USA was close to creating a nuclear bomb.

86

TFenrir t1_j96hi1f wrote

I generally appreciate what you are saying, and I feel more or less the same way, in the sense that I think that these models should be in our hands sooner, rather than later, so that we can give appropriate large scale feedback... But I also think the reasoning to hold back is more complicated. I get the impression that fear of bad results is a big part of the anxiety people like Demis feel.

30

MrEloi t1_j96kdix wrote

They are in a deep sulk about OpenAI getting all the kudos and publicity.

On top of that, they are getting beaten up by Alphabet to produce something which looks good in the media.

Their main task recently has been to throw mud at OpenAI and ChatGPT.
I suppose they want to slow them down with "concerns about safety" whilst Google tries to duct tape its AI systems into a working chat system.

OpenAI's very successful launch of ChatGPT seems to have upset quite a lot of others in the AI sector .. especially those who are usually in the media spotlight.

All that said, it now seems that OpenAI have succumbed to external pressures and have been brought back into line. They have delayed the release of GPT-4 "on safety grounds".

They are also now suggesting that AI systems, hardware, training, models etc should be regulated .. again for "safety".

Being a cynic, I think that OpenAI, Google (and the US government?) have done a deal. They will retain control of the AI platforms, thus becoming a duopoly.

Startups etc will be encouraged - but will of course have to source their AI power from the big boys.

Open Source etc AI systems will be blocked .. due to "safety issues".

High power AI GPUs will only be available to the big boys.

Getty Images, Shutterstock and the like will do licensing deals with the duopoly .. but Open Source systems will be sued for Copyright infringement.

The US government will be happy with all this : they can control the AI systems if required.

Anyway, that's the way I see things turning out.

19

blueSGL t1_j96kgc1 wrote

It's all well and good being a benevolent company that decides it's going to fund (but not release) research.

But are the people actually doing this research going to be happy grinding away at problems at a company and not having anything they've created shared?

And seeing another research institute gain kudos for something they'd already created 6 months to a year prior, but it's locked in the Google vault?

5

TFenrir t1_j96l9m3 wrote

Yeah I think this is already playing out to some degree, with some attrition from Google Brain to OpenAI.

I don't know how much is just... Normal poaching and attrition, and how much is related to different ideologies, but I think Google will have to pivot significantly to prevent something more substantial happening to their greatest asset.

4

hydraofwar t1_j96r06y wrote

At the end of the day, keeping AI for yourself or sharing it with the people is dangerous either way. But it's probably less dangerous to give access to the people than to keep it for the elite.

14

nillouise t1_j96sr78 wrote

I am also curious about this, but imo using AI to advance science is the wrong tech route. Anyway, if DeepMind keeps silent, they'd better be making something big instead of just losing the game.

3

TFenrir t1_j96v7w5 wrote

It's too easy to look at people who don't give you what you want as monsters, but I think we do ourselves a disservice if we eschew nuance for thoughts that affirm our frustrations.

22

TFenrir t1_j971ped wrote

You're not displaying any ability to look at situations like this with nuance. It's extremely simplistic to look at the world like it's composed of good guys and bad guys, and you do yourself a disservice when you fall into that trap.

It's not dick-riding to say "maybe there are more complicated reasons that people want to be cautious about the AI they release other than being power hungry, mustache twirling villains".

As a creative exercise, could you imagine a reason that you may even begrudgingly agree with, that someone like Demis would have to hesitate to share their AI? If you can't, don't you think that's telling?

19

helpskinissues t1_j9730cv wrote

DeepMind is the worst enemy of Google. Most people seem to think it's just a coincidence that Google AI competes with DeepMind. No. Google is consciously moving money from DeepMind to Google AI, because DeepMind is against the corporate mindset.

1

Utoko t1_j97767s wrote

Hm, they probably expected the non-profit "Open"AI not to completely switch around, stop publishing papers, and become a for-profit company. (That usually isn't the norm, or the purpose, of a non-profit.)

We had a couple of years where the corporations somehow realized that sharing their research advanced progress a lot faster. OpenAI will drive the companies back to protectionism.

20

glaster t1_j978o82 wrote

Trust me, bro. (Follows a quick answer by AI).

During World War II, the US government formed the Manhattan Project, a top-secret research program dedicated to developing the world's first nuclear weapons. The papers published during this time on nuclear topics were often focused on the technical details of creating and using nuclear fission for military purposes.

One of the most important papers from this period was "The Production of Radioactive Substances by Neutron Bombardment" by Glenn T. Seaborg and Arthur C. Wahl. Published in 1945, this paper described the discovery and isolation of several new elements through neutron bombardment, including plutonium, which would later be used in the construction of the atomic bomb.

Another key paper from this time was "Theoretical Possibility of a Nuclear Bomb" by Edward Teller. This paper explored the feasibility of creating a nuclear bomb and outlined the basic principles behind its design.

Other papers from this period focused on the design and construction of nuclear reactors, such as "The Thermal Neutron in Reactors" by Enrico Fermi and "Nuclear Chain Reaction in Uranium and Thorium" by Eugene Wigner. These papers helped lay the foundation for the development of nuclear power.

However, not all papers from this time were focused solely on technical details. Some also explored the ethical implications of using nuclear weapons in warfare. One such paper was "The Social Responsibilities of the Scientist" by James Franck, which called on scientists to consider the potential consequences of their research and to take an active role in promoting peace.

Overall, the papers published on nuclear topics during the Manhattan Project were instrumental in advancing our understanding of nuclear science and technology, and in shaping the world we live in today.

4

TemetN t1_j97ctj9 wrote

Honestly, in contrast with a lot of people here I'm less certain this was against OpenAI specifically, but that's partially because OpenAI promptly went and said they were going to do the same thing. If anything, I'm more unnerved that it's a general movement away from sharing research - and we've seen the damage this song and dance does before. Frankly I'm disgusted with both OpenAI and DeepMind at this point.

2

visarga t1_j97eu47 wrote

All this elaborate scheme falls apart in 3 months, when we get a small-scale, open-sourced ChatGPT model from Stability or others. There are many working on reproducing the dataset, code and models.

20

FirstOrderCat t1_j97i6py wrote

> Most of the technologies being used by openai are either from Google or from Deepmind.

it is just an indication that Google and DeepMind create theoretical concepts but can't execute them into a complete product.

4

bass6c t1_j97jd0e wrote

As if Google or DeepMind does not have, or cannot build, models such as OpenAI's. As of now, Google probably holds the most powerful language model in the world. PaLM beats GPT models in every major benchmark. I'm not even talking about U-PaLM or Flan-PaLM (more advanced versions of PaLM).

3

FirstOrderCat t1_j97jovv wrote

> PaLM beats GPT models in every major benchmark.

PaLM is much larger, which makes it harder to run in production serving many users' requests, so it is an example of an enormous waste of resources.

Also, current NLP benchmarks are not reliable, simply because models can be pretrained on them and you can't verify this.

5

Aggravating-Act-1092 t1_j97jrcu wrote

Yeah, this. It seems unlikely that DeepMind is behind OAI from a science perspective. OAI has done more/better to monetize LLMs, but between Sparrow, Chinchilla, Gato and Flamingo, DeepMind definitely appears to have a good grasp.

As mentioned already, Demis said they would be cutting back on publications, what we are seeing is just that.

13

MrEloi t1_j97jzfv wrote

And suppose all parts of such systems and related activities are declared illegal - or even terrorist devices?

The media are in the government's and big corporations' pockets .. I can just imagine the steady propaganda against "dangerous private AI" they could pump out.

3

Gagarin1961 t1_j97l74o wrote

> get the impression that fear of bad results is a big part of the anxiety people like Demis feel.

Then he shouldn’t be upset with chatGPT at all, as their product hasn’t produced a particularly “bad” result.

It’s been nothing but positive for millions. He was wrong, the time is right.

10

bass6c t1_j97mvsk wrote

This was a reply to your comment stating Google can't convert theoretical concepts into an actual product. That's not the case. The thing is, Google isn't interested in shipping costly LLMs only to hurt their own business. It's not that they can't; it's that they won't.

1

FirstOrderCat t1_j97o5hw wrote

I think my point still stands:

- Google didn't ship an LLM as a product yet, and is now forced to catch up because it lost the innovation race (even though you think they are not interested, lol)

- OpenAI shipped multiple generations of LLM products already

6

YobaiYamete t1_j97rvdp wrote

> Being a cynic, I think that OpenAI, Google (and the US government?) have done a deal. They will retain control of the AI platforms, thus becoming a duopoly.

Lol people say these things and don't realize that Amazon and Apple and Nvidia and the other big companies also have their own AI in the works, as well as, y'know, countries outside the US

The genie is out of the bottle, there's zero chance just two or three companies will get to keep it. Every billionaire worth their salt is focusing heavily on the AI field right now

9

YobaiYamete t1_j97sau0 wrote

More like "Here's exactly how to make a car that can run for 200,000 miles on one drop of water. I'm not going to make it though, because won't someone think of the poor oil barons?"

then

"ZOMG!!!! Someone made the car and is selling it for billions???"

It's baffling that Google has sat on the tech for so long, and it's fully justified that another upstart is castrating them after actually using it.

14

helpskinissues t1_j97t384 wrote

https://www.businessinsider.com/deepmind-secret-plot-break-away-from-google-project-watermelon-mario-2021-9

DeepMind (Demis) is against corporate approaches. Google bought DeepMind, and Demis later regretted that transaction. They're in a tense relationship, which explains why in recent years Alphabet has heavily invested in Google AI to separate itself from DeepMind. Anyone who follows AI news closely would know that Google ignores most DeepMind news. They don't even tweet about DeepMind's progress, yet they tweet everything about Google AI.

They have two LLMs (LaMDA and Sparrow), and the one that's going to be released in Google products is LaMDA, not Sparrow (DeepMind's). DeepMind is a rebel inner research team inside Google. I wouldn't even say they're inside Google; they're not even in the same country.

12

MrEloi t1_j97x9gk wrote

>The genie is out of the bottle, there's zero chance just two or three companies will get to keep it. Every billionaire worth their salt is focusing heavily on the AI field right now

Agreed .. but these big firms will all do their darndest to 'tax' the population's use of AI.

2

nomorsecrets t1_j97ykqd wrote

Nukes are the only thing I can think to compare it to, even though I know it doesn't make sense.
Nuclear capability for every man, woman and child.

The threat of Mutually Assured Destruction will not hold up on a grand scale.

10

tangent26_18 t1_j98188y wrote

What good is the current AI? Private knowledge for curious citizens, and education. Well, the government basically owns education, and the media will control how citizens are informed. I see this as an opportunity to bottleneck all of our new knowledge generation even more than the university specialization system already does. We will all be informed by a centralized monopoly of knowledge owners. This will lead to a monopoly on our past, present and future. There may be fewer wars, except between West and East... the final conflict. If we can all live globally in peace, then our survival will depend on this centralized knowledge center.

1

ChipsAhoiMcCoy t1_j983ecw wrote

Didn't Google recently come out and say they were no longer going to share academic progress publicly?

1

uishax t1_j98a46e wrote

It sounds extremely dumb to fire your AI engineers at the beginning of 2023, when it's plainly obvious the AI tsunami is about to hit. They have been employed for a decade now, through years when AI produced no economic returns, so there's no point laying them off now.

5

gosu_link0 t1_j98ae0y wrote

Except DeepMind and Demis were the ones who invented the technology behind ChatGPT and made it available for free for others like OpenAI to copy.

Literally the opposite of "keeping it for himself".

5

DukkyDrake t1_j98doqz wrote

We're transitioning to the monetization phase of the journey. This is where we start building out AI services for any and everything under the sun that can generate enough revenue. By the turn of the decade, after all these distributed AI services have permeated all of human society, they will collectively be viewed as an AGI.

4

Superschlenz t1_j98leen wrote

>It seems unlikely that DeepMind is behind OAI from a science perspective

So it seems unlikely that Alphabet is not just pouring another $10B into DeepMind as Microsoft did with OpenAI?

Hahaha, just kidding. The people at DeepMind are so much more intelligent than the people at OpenAI, they can run all the new models perfectly inside their heads and don't need massive compute to verify and fix their buggy ideas (or hire a load of paid workers for RLHF).

−1

EasternBeyond t1_j98p2rv wrote

Google doesn't want to cannibalize its own business, which is search advertising. They probably realized early on that LLMs would compete with their main profit generator, so they decided not to allocate significant capital to making them a publicly available product.

6

TampaBai t1_j98pfta wrote

Yes, this fiasco reminds me of Steve Jobs stealing the intellectual property of the GUI from Xerox. Xerox was sitting on perfectly implementable technology, but didn't seem to think ordinary consumers had any need for such an interface. Jobs evidently never signed any kind of confidentiality agreement, as Xerox assumed his intentions for touring the facility were educational. Soon thereafter, Jobs pilfered Xerox's technology -- and the rest is history, as we are all accustomed to using what became the "mouse". I hope there are others who, like Jobs, will do what it takes to get this tech into all of our hands as soon as possible.

6

hold_my_fish t1_j99gwpn wrote

Wow, what you've noticed about DeepMind's blog is quite striking. To have a two-months-and-counting blackout there is strange.

5

Yuli-Ban t1_j9a02eh wrote

https://en.wikipedia.org/wiki/Soviet_atomic_bomb_project#World_War_II_and_accelerated_feasibility

> In 1940–42, Georgy Flyorov, a Russian physicist serving as an officer in the Soviet Air Force, noted that despite progress in other areas of physics, the German, British, and American scientists had ceased publishing papers on nuclear science. Clearly, they each had active secret research programs

3

PoliteThaiBeep t1_j9acefq wrote

When the powerful call all the shots, it shifts wealth dramatically towards the elite and away from the public, reducing quality of life and innovation.

It would also mean friends and family of the powerful hold the keys to major industry sectors and companies and won't let anyone new in, so incumbents can never be overthrown by a new business (Blockbuster -> Netflix).

This is exactly what Russia is: Putin holds all the power, and whenever a new company comes up that does things in an innovative way, forcing incumbents out - like Yandex, VK, Tinkoff and many others - he'd either buy them out for cheap (Yandex) or, if that's not successful, threaten the CEO, publicly defame him on state TV and force him out of the country, making him sell for pennies (VK, Tinkoff). All of these companies now belong to Putin's friends via one scheme or another.

And when you look at the map and export data by country, you wonder how, despite such a massive stream of wealth from oil and gas, Russian people have the worst quality of life in Europe (tied with Ukraine and Belarus). Many countries have nothing and yet enjoy a significantly better quality of life (Estonia, Singapore, etc).

Basically, if you look at a country where some guy/girl who was nobody was allowed to force a powerful corporation out through their innovation and ingenuity - that's a good sign that democracy is working there.

Of course it's not black and white; it's a spectrum. If we look at any society decades or hundreds of years ago, their best societies would look far worse than most today, and their worst would be far worse than North Korea today.

Still, it's obvious that more democracy means more progress, faster innovation, better quality of life and reduced power of the wealthy.

1

nomorsecrets t1_j9ad5fu wrote

Yeah well too bad. They don't have a choice in the matter and that should be crystal clear to them.
They have no one to blame but themselves.
Cannibalization and adaptation is their one and only move.

1

nomorsecrets t1_j9adikm wrote

ClosedAI also opened Pandora's box, so they do deserve some credit for getting the ball rolling.

Now it will likely be on the next OpenAI to release the next milestone advancement, unless OpenAI can strike twice with GPT-4 and move the medium forward past the ChatGPT-plus-web-search capability of the new Bing.

I have no faith in Google doing it, if that isn't clear. I hope they prove me wrong though.

1

Aggravating-Act-1092 t1_j9bk0x0 wrote

That's a good point. Their accounts for 2021 are here:

https://find-and-update.company-information.service.gov.uk/company/07386350/filing-history/MzM1NDYzODM1NmFkaXF6a2N4/document?format=pdf&download=0

That works out at just under 2 billion USD in 2021. Given their own and the industry trend we can probably assume 2022 is higher and 2023 will be higher still.

OAI gave no timeline over which their $10B injection will be spent, but presumably more than 1 or 2 years. So these two are definitely in the same league.

2