
CollapseKitty t1_j8ggeex wrote

From the most recent interview I heard, Altman's plan for alignment was roughly, "Hopefully other AI figures it out along the way *shrug*".

I haven't heard him sufficiently refute any of Eliezer's more fundamental arguments, nor provide any real rationale beyond "hopefully it figures itself out," which our entire history with machine learning suggests is unlikely, at least on the first and only try we get at AGI.

As others point out, Altman's job is to push, hype, and race toward AGI. Why would we trust his assessments when painting a bright future is in his immediate interest? Especially when those assessments are based on next to nothing.

Ultimately, the challenge isn't necessarily that alignment is impossible, or even insanely hard (though it appears to be from every perspective), but that our methodology for developing new tech is trial and error, and we only get one try at successful alignment. This is vastly exacerbated by the unfathomable payoff of AGI and the ensuing race to reach it, since it offers a first-past-the-post, winner-takes-everything payout.

You could say the real alignment problem is getting humanity to take a safe approach and collectively slow down, which obviously gets more and more difficult as the technology proliferates and becomes more accessible.
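To make that coordination problem concrete, here's a toy sketch (the payoff numbers are invented for illustration, not anything from the actual discussion) of why individual labs keep racing even when everyone would prefer a coordinated slowdown:

```python
# Toy two-player "race vs. slow down" game with made-up payoffs.
# Each lab chooses RACE or SLOW. Racing alone wins the whole prize;
# racing together raises the chance of misaligned AGI for everyone.

RACE, SLOW = "race", "slow"

# payoff[my_choice][their_choice] = my expected payoff (arbitrary units)
payoff = {
    RACE: {SLOW: 10.0,   # I win the race outright
           RACE: -5.0},  # both race: high shared risk of catastrophe
    SLOW: {SLOW: 5.0,    # coordinated slowdown: smaller but safer payoff
           RACE: -8.0},  # I slow down, they race, and the risk still lands on me
}

for mine in (RACE, SLOW):
    for theirs in (RACE, SLOW):
        print(f"I {mine}, they {theirs}: my payoff = {payoff[mine][theirs]}")

# Whatever the other lab does, RACE pays more for me (10 > 5 and -5 > -8),
# so both labs race and land on (-5, -5) instead of the better (5, 5):
# the standard prisoner's-dilemma / tragedy-of-the-commons structure.
```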

12

Frumpagumpus t1_j8gkbvm wrote

The loop for AI to do recursive self-improvement runs through a very, very long supply chain, unless it can get very far on algorithmic improvements alone.

So I don't see why we shouldn't just assume the less hardware overhang the better,

which would pretty much mean we should go as fast as possible.
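For what it's worth, here's a rough numeric sketch of what "hardware overhang" means in this argument; every parameter is invented, it only shows the shape of the claim that arriving later leaves more spare compute lying around:

```python
# Toy model of "hardware overhang"; all numbers are invented for illustration.
# Assume total available compute grows exponentially over time, while the
# compute needed to train a first AGI-level system stays roughly fixed.
# Overhang = how much scale-up an AGI could grab on day one.

def available_compute(year, base=1.0, doubling_years=2.5):
    """Total world compute in arbitrary units, doubling every ~2.5 years."""
    return base * 2 ** ((year - 2023) / doubling_years)

AGI_TRAINING_COST = 2.0  # arbitrary units; assumed constant for this sketch

for arrival_year in (2030, 2040, 2055):
    overhang = available_compute(arrival_year) / AGI_TRAINING_COST
    print(f"AGI in {arrival_year}: overhang ~ {overhang:,.1f}x the training run")

# The later AGI arrives, the more spare compute exists for it to scale into --
# that's the "go fast to keep the overhang small" argument. The caveat above
# still applies: self-improvement may not need that spare hardware at all if
# algorithmic gains alone go far enough.
```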

3

CollapseKitty t1_j8gmpw4 wrote

We simply don't know.

AlphaZero became incomparably better at Go than the sum total of all human play across history within about 8 hours of self-play.

AlphaFold took several months, and human help, but solved a problem (protein structure prediction) that had been thought effectively impossible.

The risk of assuming that a sufficiently advanced agent won't be able to self-scale, at least into something beyond our ability to intervene against, is incalculable.

If we have a 50% chance of succeeding at alignment if we wait 30 years, but a 5% chance if we continue at the current pace, isn't the correct choice obvious? Even if it's a 90% chance of success at current rates (the opposite is far more likely), why risk EVERYTHING when waiting could even marginally increase our chances?

The payout is arbitrarily large, as is the cost of failure. Every iota of extra chance is incomprehensibly valuable.
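A back-of-the-envelope version of that trade-off, using the hypothetical probabilities above and placeholder magnitudes for the stakes (both arbitrary):

```python
# Expected-value sketch of "wait 30 years vs. race ahead", using the
# hypothetical probabilities above and placeholder magnitudes for the stakes.

PAYOFF = 1e6   # value of aligned AGI (arbitrary large positive number)
COST = -1e6    # cost of misaligned AGI (arbitrary large negative number)

def expected_value(p_success):
    return p_success * PAYOFF + (1 - p_success) * COST

for label, p in [("wait 30 years", 0.50), ("current pace", 0.05),
                 ("optimistic current pace", 0.90)]:
    print(f"{label:>24}: P(aligned) = {p:.2f}, EV = {expected_value(p):+,.0f}")

# Each extra percentage point of alignment probability is worth 1% of the
# full swing between the best and worst outcomes:
per_point = (PAYOFF - COST) / 100
print(f"each +1% chance of alignment is worth {per_point:+,.0f} in EV")

# Since both PAYOFF and COST are effectively unbounded here, that marginal
# value grows without limit -- which is the "every iota of extra chance is
# incomprehensibly valuable" point.
```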

Unless you're making the argument from a personal perspective (I want to see AGI before I die), or you value the progress of intelligence even at the cost of all other life, you should be in favor of slowing things down.

6

Frumpagumpus t1_j8go4k2 wrote

You'll have to convince TSMC, Intel, all the other fabs, and the governments of the USA, China, Europe, India, Russia, and, if we're talking about 30 years, maybe Nigeria, Indonesia, Malaysia, and a few others before you can convince me, is all I'm saying.

The risk of nuclear war or other existential catastrophe is also non-zero.

4

CollapseKitty t1_j8gvkye wrote

It's purely hypothetical, unfortunately. You're right that we are actively barreling toward uncontrollable systems, and there is likely nothing, short of global catastrophe/nuclear war, that can shift our course.

I stand by the assessment, and I think we should acknowledge that our current path is basically mass suicide. For all of life.

The ultimate tragedy of the commons.

3

bildramer t1_j8htxr5 wrote

I think the hardware overhang is already huge; there's no point in taking on extra risk just to make AI "ludicrously good/fast" instead of "ludicrously good/fast plus a little bit". Also, the algorithms that give you AGI are simple enough that evolution could find one.

2