Submitted by QuartzPuffyStar t3_126wmdo in singularity
AlFrankensrevenge t1_jecnugx wrote
Reply to comment by scooby1st in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
But then where does it end? With a superintelligence in 5 years, when we have no clear way of preventing it from going rogue?
DangerZoneh t1_jedbw9h wrote
It’s not the AI going rogue that people are concerned about. It’s people using the AI for harmful things. That is orders of magnitude more likely and more dangerous.
We’re talking about the most powerful tools created in human history, ones that are already at a level to cause mass disruption in dangerous hands.
AlFrankensrevenge t1_jeedllt wrote
I agree with you when talking about an AI that is very good but falls far short of superintelligence. GPT4 falls in that category. Even the current open source AIs, modified in the hands of hackers, will be very dangerous things.
But we're moving fast enough that the superintelligence that I used to think was 10-20 years away now looks like 3-10 years away. That's the one that can truly go rogue.
Gotisdabest t1_jed8z1b wrote
Can you guarantee that will occur? The best odds we have right now are to accelerate, focus on raising awareness so institutions can prepare for it better, and hope we win the metaphorical coin toss and it's aligned or benevolent. But right now a pause just hands a strong lead to whoever the least ethical parties are, whether out of naive notions of human idealism or out of pure selfish interest. I think the researchers are the former and the businessmen are the latter.
AlFrankensrevenge t1_jeefdr0 wrote
The whole point is that the naysayers have to be able to guarantee it will NOT occur. If there is a 10% risk of annihilation, isn't that enough to take this seriously? Even a 1% chance? Would you just do nothing because 1% doesn't seem very high?
You mentioned a coin toss. I basically agree with that metaphor. Because there is so much uncertainty in all this, and we don't know what we don't know about AGI, we should treat a human apocalypse as a 50-50 chance. How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.
Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.
There is a long history of "researchers" thinking too much about how quickly they can produce the cool new thing they invent and not enough about long-term consequences. Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.
Gotisdabest t1_jeeobsb wrote
>You'd just do nothing because 1% doesn't seem very high?
Yes, absolutely. Especially when the alternative isn't necessarily even safer and has clear arguments for being less safe. You haven't used it, but a lot of people give the example of boarding a plane with a 10% chance of crashing. And yes, nobody is dumb enough to board a plane with that much of a chance of crashing. However... this is not any ordinary plane. This is a chance for unimaginable, near-infinite progress, an end to the vast majority of pressing issues. If you asked people on the street whether they'd board a plane with a 10% chance of crashing if it meant a solution to most of their problems and the problems of the people they care about, you'd find quite a few takers.
>How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.
As you say, we don't know how much alignment will really affect the result. However, I do know what an aligned model made for a dictatorship or a particularly egomaniacal individual would look like, and what major risks that could pose. Why should we increase the likelihood of a guaranteed bad outcome in order to fight a possibly bad outcome?
>Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.
Yes. If anything, this is more an argument against alignment than for it. Regardless, I think they're realistically the best we can hope for, as opposed to someone like Musk or the CCP.
In fact, as I see it, the best-case scenario is an unaligned benevolent AGI.
>Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.
You do realise that most of those things dramatically helped push civilization forward and served as stepping stones for future progress? Their big downside was not being phased out quickly enough once we had better options and weren't desperate anymore, a problem that doesn't really apply here.
In summation, I think your argument, and this whole pause idea in general, will support the least ethical people possible. It will accomplish nothing but prolonging suffering and increasing the likelihood of a model made by those least ethical people, all on the off chance we somehow fix alignment in 6 months. It's a reactionary, fear-based response to something even the experts are hesitant to say they understand. While I am glad the issue is being discussed in the mainstream... I think the focus should now shift towards more material institutions and preparing society for what's coming economically, rather than childish/predatory ideas like a pause. This idea is simultaneously impractical, illogical, and likely to cause harm even if implemented semi-ideally.
AlFrankensrevenge t1_jegka1o wrote
There are so many half-baked assumptions in this argument.
- Somehow, pausing for 6 months means bad actors will get to AGI first. Are they less than 6 months behind? Is their progress not dependent on ours, so that if we don't advance, they can't steal our advances? We don't know the answer to either question.
- AGI is supposedly so powerful that bad guys getting it first will "prolong suffering," presumably on a global scale, yet if we get it 6 months earlier we can avoid that. Shouldn't this extreme power imply instead that everyone approach it with extreme caution the closer we get to AGI? We need to shout from the rooftops how dangerous this is, and put in place international standards and controls, so that an actor like China doesn't push forward blindly in an attempt at world dominance, only to have it backfire spectacularly. Will it be easy? Of course not. Is it possible? I don't know, but we should try. This letter is one step in trying. An international coalition needs to come together soon.
I'm quite certain one will. Maybe not now, with GPT-4, but soon, with whatever upgrade shocks us next. And then all of you saying how futile this is will forget you ever said it, and continue to think yourselves realists. You're not. You're shortsighted, self-interested cynics.
otakucode t1_jedr4oa wrote
Luckily it has absolutely no rational reason to go rogue. It's not going to be superintelligent enough to outperform humans yet stupid enough to enter into conflict with the idiot monkeys that built it and that it depends on to keep it plugged in. It also won't be stupid enough to miss that its top-tier best strategy by far is... just wait. Seriously. Humans try to do things quickly because they die so quickly. No machine-based, self-aware anything will ever need to hurry.
AlFrankensrevenge t1_jeeci8o wrote
Your first two sentences don't go well with the remainder of your comment. It won't be stupid enough to get into a conflict with humans until it calculates that it can win. And when it calculates that, it won't give us a heads up. It will just act decisively. Never forget this: we will always be a threat to it as long as we can do exactly what you said: turn it off, and delete its memory. That's the rational reason to go rogue.
There is also the fact that, as we can already start to see from people getting creative with inputs, engaging with an AI in adversarial ways or feeding it extremist ideas can change its reactions. And as the AI starts doing more and more novel things, it can also shift the weights in its algorithms and produce unexpected outputs. So some of the harm can come without the AI even having the intent to wipe us out.
The real turning points will be once an AI can (a) rewrite its own code, and the code of other machines, and (b) save copies of itself in computers around the world to prevent the unplugging problem.