
acutelychronicpanic t1_je9dxaa wrote

Yes! This is exactly what is needed.

Concentrated development in big corps means few points of failure.

Distributed development means more mistakes, but they aren't as high-stakes.

That and I don't want humanity forever stuck on whatever version of morality is popular at Google/Microsoft or the Military.

275

Trackest t1_je9imnr wrote

AI seems to be developing too fast and providing too much potential profit to corporations. I am doubtful that a CERN- or ITER-like regulatory framework can effectively become the leading edge of AI research without some kind of drastic merger of OpenAI, DeepMind, etc. into the organization, which would be practically impossible.

However, I do agree that if it were possible for every leading AI lab to be suddenly merged into one entity, an open international effort would probably be the best model.

45

acutelychronicpanic t1_je9ks0m wrote

Here is why I respectfully disagree:

  1. It is highly improbable that any one attempt at alignment will perfectly capture what humans value. For starters, there are at least hundreds of different value systems that people hold across many cultures.

  2. The goal should not be minimizing the likelihood of any harm. The goal should be minimizing the chances of a worst-case scenario. The worst case isn't malware or the fracturing of society or even wars. The worst case is extinction/subjugation.

  3. Extinction/subjugation is far less likely with a distributed variety of alignment models than with one single model. With a single model, the creators could do a bait and switch and become like gods or eternal emperors with the AI aligned to them first and humanity second. Or they could just get it wrong. Even a minor misalignment becomes a big deal if all power is concentrated in one model.

  4. If you have hundreds of attempts at alignment that are mostly good faith attempts, you decrease the likelihood that they share the same blindspots. But it is highly likely that they will share a core set of ideals. This decreases the chances of accidental misalignment for the whole system (even though the chances of having some misaligned AI increases).

Sorry for the wall of text, but I feel that this is extremely important for people to discuss. I want you to tear apart the reasoning if possible because I want us to get this right.

52

Trackest t1_je9mlrd wrote

First off, I do agree that in an ideal world, AI research would continue under a European-style, open-source, collaborative framework. Silicon Valley companies in the US are really good at "moving fast and breaking things," which is why most AI innovation is happening in the US currently. However, since AI is a major existential risk, I believe moving to the kind of strict, controlled progress we see with nuclear fusion at ITER and theoretical physics at CERN is the best model for AI research.

Unfortunately, there are a couple of points that may make this unfeasible in reality.

  • Unlike nuclear fusion or theoretical physics, where profitability and application potential are extremely low during the R&D phase, every improvement in AI that brings us closer to AGI has extreme profit potential in the form of automating more and more jobs. Corporations have no motive to hand their AI research over to a non-profit international organization besides the goodness of their hearts.
  • AGI and Proto-AGI models are huge national security risks that no nation-state would be willing to give up.
  • Open-sourcing research will greatly increase the risk of misaligned models landing in the wrong hands, or of nations continuing research secretly. If AI research has to be concentrated within an international body, there should be a moratorium on large-scale AI research outside of that organization. This may be a deal-breaker.

If we can somehow convince all the top AI researchers to quit their jobs and join this LAION initiative that would be awesome.

14

acutelychronicpanic t1_je9qay6 wrote

I don't mean some open-source ideal. I mean a mixed approach, with governments, research institutions, companies, and megacorporations all doing their own work on models. Too much collaboration on alignment may actually lead to issues where weaknesses are shared across models. Collaboration will be important, but there need to be diverse approaches.

Any moratorium falls victim to a sort of prisoner's dilemma: only 100% worldwide compliance helps everyone, but even one group ignoring it means the moratorium hurts the 99% who participate and benefits the 1% rogue faction. Even apocalypse isn't off the table if that happens.
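The incentive structure described above can be sketched as a toy payoff function. The numbers here are purely illustrative assumptions chosen to show why defection dominates, not figures from any real analysis:

```python
# Toy model of the moratorium dilemma: each faction chooses to "comply"
# with a research moratorium or "defect" and keep researching.
# Payoff values are made-up assumptions for illustration only.

def payoff(my_choice: str, others_all_comply: bool) -> int:
    """Payoff for one faction; higher is better for that faction."""
    if my_choice == "comply" and others_all_comply:
        return 2   # everyone pauses: shared safety benefit
    if my_choice == "comply" and not others_all_comply:
        return -3  # you pause while a rogue faction races ahead
    if my_choice == "defect" and others_all_comply:
        return 5   # you alone keep researching: large unilateral gain
    return 0       # general defection: the status-quo race

# Defecting pays more no matter what everyone else does,
# which is exactly the prisoner's-dilemma structure:
assert payoff("defect", True) > payoff("comply", True)
assert payoff("defect", False) > payoff("comply", False)
```

With these payoffs, compliance is only collectively rational; individually, each faction does better by defecting, so the moratorium unravels without perfect enforcement.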

It's a knee-jerk reaction.

The strict and controlled research is impossible in the real world and, I think, likely to increase the risks overall due to only good actors following it.

The military won't shut its research down. Not in any country, except maybe some EU states. We couldn't even do this with nukes, and those are far less useful and far less dangerous.

16

Trackest t1_je9s80s wrote

Right, taking into account real-world limitations, perhaps your suggestion is the best approach. A worldwide moratorium is impossible.

Ideally, reaching AGI is harder than we think, so the multiple actors working collaboratively have time to share which alignment methods work and which do not, as you described. I agree that having many actors working on alignment will increase the probability of finding a method that works.

However with the potential for enormous profits and the fact that the best AI model will reap the most benefits, how can you possibly ensure these diverse organizations will share their work, apply effective alignment strategies, and not race to the "finish"? Getting everyone to join a nominal "safety and collaboration" organization seems like a good idea, but we all know how easily lofty ideals collapse in the face of raw profits.

3

acutelychronicpanic t1_je9ttym wrote

The best bet is for the leaders to just do what they do (being open would be nice, but I won't hold my breath), and for at least some of the trailing projects to collaborate in the interest of not becoming obsolete. The prize isn't necessarily just getting rich; it's also creating a society where being rich doesn't matter so much. Personally, I want to see everyone get to do whatever they want with their lives. Lots of folks are into that.

Edit & Quick Thought: Being rich wouldn't hold a candle to being one of the OG developers of the system that results in utopia. Imagine the clout. You could make t-shirts. I'll personally get a back tattoo of their faces. Bonus: there's every chance you get to enjoy it for... forever? Aging seems solvable with AGI.

If foundational models become openly available, then people will be working more on fine-tuning which seems to be much cheaper. Ideally they could explicitly exclude the leading players in their licensing to reduce the gap between whoever is first and everyone else, regardless of who is first. (But I'm not 100% on that last idea. I'll chew on it).

If we all have access to very-smart-but-not-AGI systems like GPT-4 and can more easily make narrow AI for cybersecurity, science, etc., then even if the leading player is 6 months ahead, their intelligence advantage may not be enough to let them leverage their existing resources to dominate the world, just to get very rich. I'm okay with that.

4

Caffdy t1_jebfvjx wrote

> The prize isn't necessarily just getting rich, its also creating a society where being rich doesn't matter so much

This phrase, this phrase alone, says it all. Getting rich and all the profits in the world won't matter when we are an inch away from extinction; from AGI to artificial superintelligence won't take long. We are a bunch of dumb monkeys fighting over a floating piece of dirt in the blackness of space; we're not prepared to understand and take on the risks of developing this kind of technology.

−1

Borrowedshorts t1_je9zb9x wrote

ITER is a complete joke. CERN is doing okay, but doesn't seem to fit the mold of AI research in any way. There's really no basis for holding these up as the models AI research should follow.

5

Trackest t1_jea2k7c wrote

Yes, I know these projects are bureaucratically overloaded and progress is extremely slow. However, they are some of the only examples we have of actual international collaboration at a large scale. For example, ITER has US, European, and Chinese scientists working together on a common goal! Imagine that!

This is precisely the kind of AI research we need: slow progress that is transparent to everyone involved, so that we have time to think and adjust.

I know a lot of people on this sub can't wait for AGI to arrive tomorrow and crown GPT the new ruler of the world. They reflexively oppose anything that might slow down AI development. I think this discourse comes from a dangerously blind belief in the omnipotence and benevolence of ASI, most likely due to a lack of trust in humans stemming from the recent pandemic and fatalist/doomer trends. You can't just wave your hands and bet everything on some machine messiah saving humanity just because society is imperfect!

I would much prefer we make the greatest possible effort to slow down and adjust before we step into the event horizon.

−2

Borrowedshorts t1_jeabhvm wrote

ITER is a complete disaster. If people thought NASA's SLS program was bad, ITER is at least an order of magnitude worse. I agree AI development is going extremely fast. I disagree that there's much we can do to stop it or even slow it down much. I agree with Sam Altman's take: it's better for these AIs to get into the wild now, while the stakes are low, than to experience that for the first time when these systems are far more capable. It's inevitable that it's going to happen; it's better to make our mistakes now.

8

Smellz_Of_Elderberry t1_jebrrey wrote

>However since AI is a major existential risk I believe moving to a strict and controlled progress like what we see with nuclear fusion in ITER and theoretical physics in CERN is the best model for AI research.

This is going to lead to us waiting decades for progress and testing. Look at drug development: it takes decades of clinical trials before a drug even starts becoming available, and then it's prohibitively expensive. We might have cured cancer already if we didn't have so many barriers in the way.

>Open-sourcing research will greatly increase risk of mis-aligned models landing in the wrong hands or having nations continue research secretly. If AI research has to be concentrated within an international body, there should be a moratorium on large scale AI research outside of that organization. This may be a deal-breaker.

So you want an unelected international body to hold the keys to the most powerful technology in existence? That sounds like a terrible idea. Open source is the only solution to alignment, because it will make the power available to all, thus allowing all the disparate and opposing ideological groups the ability to align AI to themselves in a custom manner.

All an international group will do is align AI in a way that maximizes the benefit of all parties involved. Parties which really have no incentive to actually care about you or me.

3

Smallpaul t1_jec8qy8 wrote

Your mental model seems to be that there will be a bunch of roughly equivalent models out there with different values, and they can compete with each other to prevent any one value system from overwhelming.

I think it is much more likely that there will exist one, single lab, where the singularity and escape will happen. Having more such labs is like having a virus research lab in every city of every country. And like open sourcing the DNA for a super-virus.

3

acutelychronicpanic t1_jecoprq wrote

My mental model is based on this:

Approximate alignment will be much easier than perfect alignment. I think it's achievable to have AI with superhuman insight that is well enough aligned that it would take deliberate prodding or jailbreaking to get it to model malicious action. I would argue that in many domains, GPT-4 already fits this description.

Regarding roughly equivalent models, I think there is an exponential increase in the intelligence required to take action in the world as you attempt to do more complicated things or act further into the future. My intuition is based on the complexity of predicting the future in chaotic systems, and society is one such system. I don't think a 10x increase in intelligence will necessarily lead to a 10x increase in competence. I strongly suspect we underestimate the complexity of the world. This may buy us a lot of time by decreasing the peaks in the global intelligence landscape, to the extent that humans utilizing narrow AI and proto-AGI may have a good chance.

I do know that, regardless of whether the AI alignment issue can be solved, the largest institutions currently working on AI are not, as institutions, well aligned with humanity. The ones that would continue working despite a global effort to slow AI especially cannot be trusted.

I'm willing to read any resources you want to point me to, or any arguments you want to make. I'd rather be corrected if possible.

1

PurpedSavage t1_jeba5qb wrote

Given your assumptions are true, your analysis is completely correct. Correct me if I'm wrong, though, but I think you're assuming that LAION wants to disband all other AI projects and monopolize the AI framework. I think this isn't a correct assumption. They merely want to add on to the existing decentralized network of AI models and create a stronger framework of checks and balances over the development of AI, by involving experts from every country and providing increased transparency. It's a response to the black box OpenAI, Google, and Amazon have put up. They put this black box up so they can keep their research and trade secrets hidden.

1

acutelychronicpanic t1_jebavnh wrote

Quite the opposite. I support these systems being open sourced. I am against the bans being proposed by others in the public.

3

Cr4zko t1_je9t9x7 wrote

CERN's sketchy as fuck if you ask me. Weren't they those guys that did rituals for some reason?

−12

agonypants t1_jea5bfr wrote

Quite frankly, I trust the morality of Google/Microsoft/OpenAI far more than I do the morality of our pandering, corrupt, tech-illiterate "leaders."

7

acutelychronicpanic t1_jea7ze0 wrote

I agree, but those aren't the only two choices.

15

FaceDeer t1_jeaiuod wrote

Indeed, there's room for every approach here. We know that Google/Microsoft/OpenAI are doing the closed corporate approach, and I'm sure that various government three-letter agencies are doing their own AI development in the shadows. Open source would be a third approach. All can be done simultaneously.

3

ninjasaid13 t1_jebjax5 wrote

>Quite frankly, I trust the morality of Google/Microsoft/OpenAI far more than I do the morality of our pandering, corrupt, tech-illiterate "leaders."

are you talking about U.S. leaders or leaders in general?

0

agonypants t1_jebqpvr wrote

Specifically I'm thinking of the half of US Congress that believes drag queens and Hunter Biden's laptop are our number one threats. Ya know...idiots.

7

raika11182 t1_jead4pz wrote

Open-source AI software is crucial for ensuring that all companies have access to these technologies without having to pay exorbitant fees or licensing costs, and it also helps ensure a level playing field where small startups can compete with large corporations. It's possible that a closed source tool may be more powerful for some time, but having something with an open source basis for everyone else keeps a free / low cost alternative in the running.

6

HeBoughtALot t1_jebrefx wrote

When I think about points of failure, I immediately think of the brittleness of a system, but in this context, it can result in too much power in too few hands, another type of failure.

2

acutelychronicpanic t1_jebtswp wrote

Yes. It's not just the alignment of AI with its creator that is an issue; it's the alignment of the creator with humanity as a whole.

2

Merikles t1_jeeoj55 wrote

I think this strategy is suicidal

0

acutelychronicpanic t1_jeep4kf wrote

More so than leaving this to closed-door groups that can essentially write law for all humanity through their AI's alignment?

And that's assuming they solve the alignment problem. We need more eyes on the problem 30 years ago.

1

Merikles t1_jeephe5 wrote

Not more so; equally. Both strategies very likely result in human extinction, imho.

1

acutelychronicpanic t1_jeeutdg wrote

Do you have any suggestions?

1

Merikles t1_jeew9n5 wrote

Yes, I think that a joint "AI Manhattan Project" between all major countries, combined with a global moratorium on AI research beyond current levels, enforced through a combination of methods including hardware regulations, is the most realistic path to (likely) survival.
I am aware that it is unlikely to play out this way, but I still think this is the most realistic scenario that isn't a complete Hail Mary, gambling with everyone's life.

This isn't realistic now, but it might become realistic if we begin preparing for it.
Enforcing regulations on OpenAI today would probably buy us a bit of time, either to prepare this solution, to find new solutions in AI alignment, or to develop a new general strategic approach.

1

acutelychronicpanic t1_jef28jo wrote

I think we are past that. It would maybe have worked 10 years ago.

My concern is that even models less powerful than ChatGPT (which can be run on a single PC) can be linked up as components into systems that could achieve AGI. Raw transformer-based LLMs may actually be safer than this, because they are so alien that they don't even appear to have a single objective function. What they "want" is so context-sensitive that they are more like a writhing mass of inconsistent alignments, a pile of masks. This might be really good for us in the short term. They aren't even aligned with themselves. More like raw intelligence.

I also think that approximate alignment will be significantly easier than perfect alignment. We have the tools right now; this approximate alignment is possible. Given the power, combined with the lack of agency, of current LLMs, we may surpass AGI without knowing it. The issue, of course, is that someone just has to set it up to put on the mask of a malevolent or misaligned AI. That's why I'm worried about concentrating power.

I'll admit I'm out of my depth here, but looking around, so are most of the actual researchers.

0