Submitted by kdun19ham t3_111jahr in singularity
CollapseKitty t1_j8gmpw4 wrote
Reply to comment by Frumpagumpus in Altman vs. Yudkowsky outlook by kdun19ham
We simply don't know.
AlphaZero became incomparably better at Go than the sum total of all humans across history within eight hours of self-play.
AlphaFold took several months, and human help, but solved a problem (protein structure prediction) that humans had long thought intractable.
The risk of assuming that a sufficiently advanced agent won't be able to self-scale, at least into something beyond our capacity to intervene in, is incalculable.
If we have a 50% chance of succeeding at alignment if we wait 30 years, but a 5% chance if we continue at the current pace, isn't the correct choice obvious? Even if it were a 90% chance of success at the current rate (the reverse is far more likely), why risk EVERYTHING when waiting could even marginally increase our chances?
The payout is arbitrarily large, as is the cost of failure. Every iota of extra chance is incomprehensibly valuable.
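For what it's worth, here's a minimal sketch of that expected-value comparison, using the 5%/50%/90% figures above and a purely hypothetical stake standing in for "arbitrarily large":

```python
# Minimal sketch of the expected-value argument above. The 5% / 50% / 90%
# probabilities come from the comment; the stake is a placeholder standing in
# for an "arbitrarily large" payoff and cost of failure.

STAKE = 10**15  # hypothetical magnitude; only its size relative to everything else matters

def expected_value(p_success: float, stake: float = STAKE) -> float:
    """Expected outcome when success gains `stake` and failure loses `stake`."""
    return p_success * stake - (1 - p_success) * stake

rush = expected_value(0.05)             # continue at the current pace
wait = expected_value(0.50)             # wait ~30 years for better alignment odds
optimistic_rush = expected_value(0.90)  # optimistic estimate for the current pace

print(f"rush (5%):             {rush:+.2e}")
print(f"wait (50%):            {wait:+.2e}")
print(f"optimistic rush (90%): {optimistic_rush:+.2e}")
# Because the stake dwarfs any near-term benefit, even a marginal gain in the
# probability of success dominates the comparison.
```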
Unless you're making the argument from a personal perspective (I want to see AGI before I die) or you value the progress of intelligence at the cost of all other life, you should be in favor of slowing things down.
Frumpagumpus t1_j8go4k2 wrote
You'll have to convince TSMC, Intel, all the other fabs, and the governments of the USA, China, Europe, India, Russia, and, if we're talking 30 years, maybe Nigeria, Indonesia, Malaysia, and a few others before you can convince me, is all I'm saying.
The risk of nuclear war or other existential catastrophe is also non-zero.
CollapseKitty t1_j8gvkye wrote
It's purely hypothetical, unfortunately. You're right that we are actively barreling toward uncontrollable systems and there is likely nothing, short of global catastrophe/nuclear war, that can shift our course.
I stand by the assessment, and I think we should acknowledge that our current path is basically mass suicide. For all of life.
The ultimate tragedy of the commons.