
acutelychronicpanic t1_je9ri9i wrote

Use LLMs every day. Use them to plan your meals. Use them to help with personal problems. Use them to feed your curiosity.

You'll build an intuition for how they work, and you'll be quite valuable during the transitional period where we have AI but not every company has integrated it into their systems.

Of course trade school, construction, etc. are all viable. But you can do both if you want.

*Standard disclaimer for all advice: if it ruins your life, it's all your fault for listening to a stranger on the internet.

2

acutelychronicpanic t1_je9qay6 wrote

I don't mean some open-source ideal. I mean a mixed approach, with governments, research institutions, companies, and megacorporations all doing their own work on models. Too much collaboration on alignment may actually lead to issues where weaknesses are shared across models. Collaboration will be important, but there need to be diverse approaches.

Any moratorium falls victim to a sort of prisoner's dilemma: only 100% worldwide compliance helps everyone, and even one group ignoring it means the moratorium hurts the 99% who participate and benefits the 1% rogue faction, to the point that apocalypse isn't off the table if that happens.
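
As a toy illustration of that payoff structure (a sketch with invented numbers, nothing more):

```python
# Toy payoff numbers, invented purely to illustrate the structure: with
# universal compliance everyone gains a little, but a complier loses out
# the moment anyone defects, while a defector always comes out ahead.
def moratorium_payoff(you_defect: bool, anyone_else_defects: bool) -> int:
    if you_defect:
        return 2   # rogue faction keeps advancing while everyone else waits
    if anyone_else_defects:
        return -1  # you complied, someone else didn't: you fall behind
    return 1       # 100% compliance: everyone benefits

# Whatever the others do, defecting pays more -- hence the dilemma.
for others_defect in (False, True):
    print(others_defect,
          moratorium_payoff(True, others_defect),
          moratorium_payoff(False, others_defect))
```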

It's a knee-jerk reaction.

Strict, controlled research of that kind is impossible in the real world and, I think, likely to increase overall risk, since only the good actors would follow the rules.

The military won't shut its research down, not in any country except maybe some EU states. We couldn't even do this with nukes, and those are far less useful and far less dangerous.

16

acutelychronicpanic t1_je9mzk9 wrote

The problem is that it's impossible, literally impossible, to enforce this globally unless you actively desire a world war plus an authoritarian surveillance state.

Compact models running on consumer PCs obviously aren't as powerful as SOTA models, but they are getting much better very rapidly. Any group with a few hundred graphics cards may be able to build an AGI at some point in the coming decades.

6

acutelychronicpanic t1_je9ks0m wrote

Here is why I respectfully disagree:

  1. It is highly improbable that any one attempt at alignment will perfectly capture what humans value. For starters, there are at least hundreds of different value systems that people hold across many cultures.

  2. The goal should not be minimizing the likelihood of any harm. The goal should be minimizing the chances of a worst-case scenario. The worst case isn't malware or the fracturing of society or even wars. The worst case is extinction/subjugation.

  3. Extinction/subjugation is far less likely with a distributed variety of alignment models than with one single model. With a single model, the creators could do a bait and switch and become like gods or eternal emperors with the AI aligned to them first and humanity second. Or they could just get it wrong. Even a minor misalignment becomes a big deal if all power is concentrated in one model.

  4. If you have hundreds of attempts at alignment that are mostly good faith attempts, you decrease the likelihood that they share the same blindspots. But it is highly likely that they will share a core set of ideals. This decreases the chances of accidental misalignment for the whole system (even though the chances of having some misaligned AI increases).

Sorry for the wall of text, but I feel that this is extremely important for people to discuss. I want you to tear apart the reasoning if possible because I want us to get this right.

52

acutelychronicpanic t1_je9gsif wrote

INT. TITANIC - DECK - NIGHT

Panic-stricken passengers are running in every direction. A mother is clutching her child, and people are pushing each other to get on lifeboats. Water is gushing onto the deck.

PASSENGER 1 (screaming) We're going down! We're all going to die!

Amidst the chaos, CAPTAIN SMITH steps forward and raises his hands to calm the crowd.

CAPTAIN SMITH (firmly) Everyone, please, listen to me! I understand your fear, but there's no need to panic.

The crowd quiets down, turning their attention to the captain.

CAPTAIN SMITH (continuing) Throughout history, every time the water level has risen, there has always been more boat to climb. We may not see it now, but there could be even more boat to climb that we can't imagine.

PASSENGER 2 (uncertain) But, Captain... the ship is sinking!

CAPTAIN SMITH (smiling reassuringly) Trust me. We'll find a way to climb higher. We always do.

2

acutelychronicpanic t1_je9f9q0 wrote

Understanding, as it is relevant to the real world, can be accurately measured by performance on tasks.

If I ask you to design a more efficient airplane wing, and you do, why would I have any reason to say you don't understand airplane wings?

Maybe you don't have perfect understanding, and maybe we understand it in different ways.

But to do a task successfully at a high rate, you must have some kind of mental/neural/mathematical model internal to your mind that can usefully predict the outcome as the inputs change.

That's understanding.

1

acutelychronicpanic t1_je9dxaa wrote

Yes! This is exactly what is needed.

Concentrated development in big corps means a few single points of failure.

Distributed development means more mistakes, but they aren't as high-stakes.

That, and I don't want humanity forever stuck on whatever version of morality is popular at Google, Microsoft, or the military.

275

acutelychronicpanic t1_je15hcf wrote

Well, I would think that companies would love nothing more than to hand Microsoft their employees' usage data to get a fine-tuned model trained for them. At least at large enough companies.

As far as decisions go, it doesn't have to make them. Just have it present the top 3 options, with citations to company policy, its reasoning, and the pros and cons. GPT-4 can pretty much do this now if you feed it the relevant info.
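
A minimal sketch of what that could look like (the policy text, question, and prompt are all hypothetical, and this assumes the pre-1.0 openai Python package):

```python
import openai  # the pre-1.0 openai package interface

# Hypothetical policy excerpt and question, made up for the example.
policy_excerpts = "Section 4.2: vendor contracts over $50k require two approvals."

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": (
            "You are a decision-support assistant. Do not make the decision. "
            "Present the top 3 options, each with citations to the supplied "
            "policy text, your reasoning, and pros and cons.")},
        {"role": "user", "content": (
            "Relevant policy:\n" + policy_excerpts + "\n\n"
            "Question: should we renew the vendor contract early?")},
    ],
)
print(response["choices"][0]["message"]["content"])
```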

1

acutelychronicpanic t1_je13w4s wrote

For people who work in at-risk jobs, learning how to leverage AI is a short-term solution for the next few years. That, and skilled/semi-skilled physical work. It's harder to build a robot than to download a software update.

I would just get GPT-4 and talk to it every day to build an intuition for what it can do and how to guide it to do what you want.

Past that, I don't know.

6