Submitted by QuartzPuffyStar t3_126wmdo in singularity
Gotisdabest t1_jed8z1b wrote
Reply to comment by AlFrankensrevenge in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Can you guarantee that will occur? The best odds we have right now are to accelerate and focus on raising awareness in institutions to prepare for it better, and hope that we win the metaphorical coin toss and it's aligned or benevolent. But right now a pause is just handing away a strong lead to whoever the least ethical parties are, based on naive notions of human idealism or on pure selfish interest. I think the researchers are the former and the businessmen are the latter.
AlFrankensrevenge t1_jeefdr0 wrote
The whole point is that the naysayers have to be able to guarantee it will NOT occur. If there is a 10% risk of annihilation, isn't that enough to take this seriously? Even a 1% chance? You'd just do nothing because 1% doesn't seem very high?
You mentioned a coin toss. I basically agree with that metaphor. Because there is so much uncertainty in all this, and we don't know what we don't know about AGI, we should treat a human apocalypse as a 50-50 chance. How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.
Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.
There is a long history of "researchers" thinking too much about how quickly they can produce the cool new thing they invent and not enough about long-term consequences. Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.
Gotisdabest t1_jeeobsb wrote
>You'd just do nothing because 1% doesn't seem very high?
Yes, absolutely. When the alternative isn't necessarily even safer and has clear arguments for being less safe. You haven't used it, but a lot of people give the example of boarding a plane with a 10% chance of failing. And yes, nobody is dumb enough to board a plane that has that much of a chance of crashing. However... this is not any ordinary plane. This is a chance for unimaginable and near-infinite progress, an end to the vast majority of pressing issues. If you asked people on the street whether they'd board a plane with a 10% chance of crashing if it meant a solution to most of their problems and the problems of the people they care about, you'd find quite a few takers.
>How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.
As you say, we don't know how much alignment will really affect the result. However, I do know what an aligned model made for a dictatorship or a particularly egomaniacal individual would look like, and what major risks that could pose. Why should we increase the likelihood of a guaranteed bad outcome in order to fight a possibly bad outcome?
>Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.
Yes. If anything this is more an argument against alignment than for it. Regardless, I think they're realistically the best we can hope for, as opposed to someone like Musk or the CCP.
In fact, as I see it, the best-case scenario is an unaligned benevolent AGI.
>Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.
You do realise that most of those things dramatically helped push civilization forward and served as stepping stones for future progress? Their big downside was not being phased out quickly enough once we had better options and weren't desperate anymore. A problem that doesn't really apply here.
In summation, I think your argument and this whole pause idea in general will support the least ethical people possible. It will end up accomplishing nothing but prolonging suffering and increasing the likelihood of a model made by said least ethical people, on the off chance we somehow fix alignment in 6 months. It's a reactionary and fear-based response to something even the experts are hesitant to say they understand. While I am glad the issue is being discussed in the mainstream... I think ideally the focus should now shift towards more material institutions and preparing society economically for what's coming, rather than childish/predatory ideas like a pause. This idea is simultaneously impractical, illogical, and likely to cause harm even if implemented semi-ideally.
AlFrankensrevenge t1_jegka1o wrote
There are so many half-baked assumptions in this argument.
-
Somehow, pausing for 6 months means bad actors will get to AGI first. Are they less than 6 months behind? Is their progress not dependent on our progress, so if we don't advance, they can't steal our advances? We don't know the answer to either of those things.
-
AGI is so powerful that having bad guys get it first will "prolong suffering" I guess on a global scale, but if we get it 6 months earlier we can avoid that. Shouldn't we consider that this extreme power implies instead that everyone approach it with extreme caution the closer we get to AGI? We need to shout from the rooftops how dangerous this is, and put in place international standards and controls, so that an actor like China doesn't push forward blindly in an attempt at world dominance, only to backfire spectacularly. Will it be easy? Of course not! Is it possible? I don't know, but we should try. This letter is one step in trying. An international coalition needs to come together soon.
I'm quite certain one will. Maybe not now with GPT4, but soon, with whatever upgrade shocks us next. And then all of you saying how futile it is will forget you ever said that, and continue to think yourselves realists. You're not. You're a shortsighted, self-interested cynic.