Submitted by stringShuffle t3_127asin in MachineLearning

https://www.openpetition.eu/petition/online/securing-our-digital-future-a-cern-for-open-source-large-scale-ai-research-and-its-safety

>Join us in our urgent mission to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. This monumental initiative will secure our technological independence, empower global innovation, and ensure safety, while safeguarding our democratic principles for generations to come.

469

Comments

Disastrous_Elk_6375 t1_jedf0de wrote

This is a great initiative! Let's just hope it doesn't go the way of that classic joke: the French will build it, the Germans will make it funny, the British will teach it about food, etc...

81

glichez t1_jedihz8 wrote

Collectively "leading the bull" is a much more reasonable approach than trying to stop research across the planet...

147

m98789 t1_jedsr97 wrote

Sounds like a buy signal for $NVDA

9

zoupishness7 t1_jedsv9s wrote

Can we just skip some steps and collect worldwide taxes to help train a new government?

10

tripple13 t1_jedt3cb wrote

Now that's a petition I can stand for.

Democratization of LLMs and their derivatives is, in fact, the AI-safe way - counterintuitive as it may sound to the AI DEI folks.

64

MrFlufypants t1_jedtiuv wrote

My first question too. What’s to stop OpenAI from “partnering with” a small startup they “definitely don’t own” and giving them the money/S-tier research to monopolize this thing’s use by hitting its priority matrix correctly? Stick said company in Ghana and they can play the third-world card too. And if you make that impossible by sharing access easily, I doubt anybody will have enough timeshares to train a large model. Hope I’m wrong, but I’ve become a bit cynical lately about companies not being greedy bastards.

19

gahblahblah t1_jedwifr wrote

This could be it - true open AI. Maybe this could be the answer to AI alignment and democratising AI - empowering humanity as a whole. Disarming the arms race, and working in cooperation.

34

ZetaReticullan t1_jedzbd5 wrote

!r/UsernameChecksOut

You're far too intelligent for that handle. No, you're NOT WRONG. What you wrote is EXACTLY what would happen, because companies are not going to sit on their laurels and watch their opportunity to dominate the world be snatched away without a "fight" (fair or unfair).

−10

Tostino t1_jee40if wrote

The possibility of creating an autonomous agent with current-level hardware is not as far-fetched as it may seem. A single adept engineer could conceivably construct such an agent by combining insights from disparate papers published in the field of artificial intelligence: novel algorithms, techniques, or architectures that could be integrated into a coherent, functional system. Moreover, the open-source implementations available today, such as langchain/flow and pinecone db (or similar), could provide the necessary tools and frameworks to assemble an architecture that is self-augmenting and self-refining. Such an architecture could leverage distributed computing, natural language processing, and machine learning to improve its own performance and capabilities over time, potentially enabling the agent to match, or even surpass, optimal human capacities at most undertakings.
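
To make that concrete, here is a minimal sketch of the kind of self-refining loop described above. Everything in it is a hypothetical stand-in: `call_llm` is a placeholder for a real model call (e.g. wired up through langchain), and the plain `memory` list stands in for a vector store such as pinecone db:

```python
# Hypothetical sketch of a self-refining agent loop, not a working agent.
# call_llm is a placeholder for a real model call (e.g. via langchain);
# the memory list is a stand-in for a vector database such as pinecone.

def call_llm(prompt: str) -> str:
    # Placeholder: echoes the prompt so the loop runs end to end.
    return f"response to: {prompt}"

memory: list[str] = []  # past critiques; a real agent would embed and index these

def agent_step(task: str) -> str:
    context = "\n".join(memory[-5:])                         # retrieve recent experience
    plan = call_llm(f"Plan for: {task}\nLessons so far:\n{context}")
    result = call_llm(f"Execute this plan: {plan}")          # act on the plan
    critique = call_llm(f"Critique this result: {result}")   # self-evaluate the outcome
    memory.append(critique)                                  # store the lesson
    return result

if __name__ == "__main__":
    # Each pass sees the critiques of earlier passes, which is the
    # "self-refining" part of the architecture described above.
    for _ in range(3):
        print(agent_step("summarize a new AI paper"))
```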

−2

light24bulbs t1_jee9clk wrote

Please, everyone, click through to the link and sign it.

7

light24bulbs t1_jee9lvm wrote

I agree with you. Looking at papers like Toolformer and so on, we are very close.

We are only a couple of years away from AGI, which is what I've been saying for YEARS and getting yelled at for here. The WaitButWhy article from 2016 was dead right.

−5

nomadiclizard t1_jeebzqf wrote

What's the point, when we know that if it discovers anything revolutionary related to AGI, it'll be locked down, the model will be closed for 'safety evaluation', and it will never see the light of day? Nothing 'open' in AI stays open once a whiff of AGI arrives.

2

Carrasco_Santo t1_jeeiu81 wrote

Currently, due to so much vested interest, I am suspicious of "too good" initiatives led by a group of "very virtuous" people with the aim of "democratizing technology".

4

spaceleviathan t1_jeemtn8 wrote

Not to be too irreverent, but I like that we are exponentially creeping towards realizing Asher’s Earth Central.

1

wise0807 t1_jeer2sl wrote

Who will be funding such an initiative?

2

tripple13 t1_jeetdjn wrote

What? How do you read that from my text?

I think most of them probably care, just as much as I'd assume you and I do, about how the next few years play out for the benefit of mankind.

12

AllowFreeSpeech t1_jeevp3b wrote

What bothers me is that most researchers don't care to use any model compression or efficiency techniques. They want others to pay for their architectural inefficiencies. IMO such funding could be a bad idea if it were to stifle competition among neural architectures, and a good idea otherwise.

For example, is matrix-matrix multiplication necessary, or can matrix-vector multiplication do the job? Similarly, are dense networks necessary, or can sparse networks do the job? Alternatively, the funding could go toward the engineering of optical and analog hardware that is significantly more power-efficient.
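
As a rough illustration of the scale of those two questions (a toy sketch, not a benchmark of any real model; the sizes and sparsity level here are made up):

```python
# Toy illustration of the efficiency questions above; the size n and the 5%
# sparsity level are arbitrary, chosen only to show the asymptotics.
import numpy as np
from scipy import sparse

n = 1024
W = np.random.rand(n, n)   # dense weight matrix
X = np.random.rand(n, n)   # a full activation batch
x = np.random.rand(n)      # a single activation vector

_ = W @ X   # matrix-matrix product: on the order of n**3 multiply-adds
_ = W @ x   # matrix-vector product: on the order of n**2 multiply-adds

# Sparsifying the weights cuts storage and arithmetic roughly in
# proportion to the fraction of nonzeros kept.
mask = np.random.rand(n, n) < 0.05      # keep ~5% of the weights
W_sparse = sparse.csr_matrix(W * mask)  # compressed sparse row format
_ = W_sparse @ x                        # touches only the stored nonzeros

print(f"dense params: {W.size:,}  sparse nonzeros: {W_sparse.nnz:,}")
```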

3

Scew t1_jef616t wrote

If a corporation were trying to discredit an open-source alternative to one of their projects, it might look like spreading negative propaganda about the alternative or highlighting its perceived weaknesses. For example:

FUD: The corporation may spread Fear, Uncertainty, and Doubt (FUD) about the open-source alternative, such as by suggesting that it is not secure, reliable, or compatible with other systems.

Highlighting perceived weaknesses: The corporation may highlight perceived weaknesses of the open-source alternative, such as by emphasizing areas where it falls short compared to the corporation's proprietary solution.

Undermining community support: The corporation may attempt to undermine community support for the open-source alternative by spreading misinformation about the project's development or suggesting that it lacks the necessary resources to succeed.

Offering alternative solutions: The corporation may offer alternative solutions that they claim are superior to the open-source alternative, such as by highlighting their own proprietary products or services.

Funding competitors: The corporation may fund competitors who are developing similar solutions to the open-source alternative, with the intention of creating negative publicity or drawing attention away from the alternative.

These tactics can be effective in diminishing support for the open-source alternative, but they can also be perceived as unethical and manipulative, potentially damaging the corporation's reputation and relationship with the open-source community.

2

Scew t1_jef6kiu wrote

Lol, they opened up their AI to be trained for free for "research purposes..." Sounds similar to how certain corporations profited greatly from recent events over the past couple of years... Wonder if they'll even go as far as calling people some kind of hero for helping them make a bigger profit >.>

4

kulchacop t1_jefjgf9 wrote

Origins of Communism without human governance /s

0

lacker t1_jefpz2c wrote

I’m a big fan of open source AI research, but creating a new facility doesn’t seem like the way to go. If you’re making a GPU cluster that has to be shared among a bunch of different academic groups, you’ll have to build resource-sharing software, infrastructure tools, etc., and spend all this money on what is essentially an AWS clone.

Wouldn’t it be more effective to simply give this money to AI research groups and let them buy infrastructure from the most cost-effective provider? If AWS works best, fine, if it’s some smaller infrastructure provider, that’s fine too.

This proposal seems like it would actually divert money away from AI, by spending a lot of money rebuilding the standard cluster infrastructure stuff that cloud providers already have.

9

darthmeck t1_jeg46cc wrote

I don’t know how they’d go about doing this, but there need to be provisions ensuring it can never become a for-profit entity. OpenAI gained traction by doing cutting-edge research and touting it as open to the public (or at least to researchers), and then pulled the rug out from under everyone when they struck gold. If LAION discovers a new architecture that dwarfs the capability of LLMs, they should never be able to say “ok, time to start a company and mint billions now!”

7

toothpastespiders t1_jeg98nb wrote

I'm already getting a little frustrated by how many things promoted as open source use OpenAI. I get that there's some wiggle room with terminology. But it's often on the level of just having a shell script built on top of a binary and calling it open source because you can edit the launcher.

I'm absolutely fine with OpenAI doing its thing. I'm grateful for it, in fact. But I really hate how much it's muddying the waters.

2

nateharada t1_jeh5bir wrote

I personally feel we need large-scale collaboration, not each lab getting a small boost. Something like the James Webb telescope or CERN. If they make a large cluster that's just time-shared between labs, that's not as useful IMO as letting many universities collaborate on a truly public LLM that competes with the biggest private AI organizations.

5