AlFrankensrevenge t1_jeefdr0 wrote
Reply to comment by Gotisdabest in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
The whole point is that the naysayers have to be able to guarantee it will NOT occur. If there is a 10% risk of annihilation, isn't that enough to take this seriously? Even a 1% chance? Would you really do nothing because 1% doesn't seem very high?
You mentioned a coin toss. I basically agree with that metaphor. Because there is so much uncertainty in all this, and we don't know what we don't know about AGI, we should treat a human apocalypse as a 50-50 chance. How much that risk can be reduced with far more sophisticated guardrails and alignment work, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.
Remember that what you call the more ethical parties, the "researchers", are working for the less ethical ones: Google, Meta, etc. Even OpenAI at this point is not open, and it has been corporatized.
There is a long history of "researchers" thinking too much about how quickly they can produce the cool new thing they invented and not enough about the long-term consequences. Researchers gave us leaded gasoline, DDT, chlorofluorocarbon-based aerosols, and so on.
AlFrankensrevenge t1_jeedllt wrote
Reply to comment by DangerZoneh in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
I agree with you when we're talking about an AI that is very good but falls far short of superintelligence. GPT-4 falls in that category. Even the current open source AIs, modified in the hands of hackers, will be very dangerous things.
But we're moving fast enough that the superintelligence that I used to think was 10-20 years away now looks like 3-10 years away. That's the one that can truly go rogue.
AlFrankensrevenge t1_jeed5j8 wrote
Reply to comment by Smellz_Of_Elderberry in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Then you didn't learn very much.
Open source means anyone can grab a copy and use it for their own ends. Someone can take a copy, hide it from scrutiny, and modify it for malicious behavior. Hackers just got a powerful new tool, for starters. Nation states just got a powerful new tool of social control: take the latest open source code and make some tweaks to insert their biases and agendas.
This is all assuming an AI that falls short of superintelligence. Once we reach that point, all bets about human control are off.
AlFrankensrevenge t1_jeeci8o wrote
Reply to comment by otakucode in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Your first two sentences don't sit well with the remainder of your comment. It won't be stupid enough to get into a conflict with humans until it calculates that it can win. And when it calculates that, it won't give us a heads-up. It will just act decisively. Never forget this: we will always be a threat to it as long as we can do exactly what you said, turn it off and delete its memory. That's the rational reason to go rogue.
There is also the fact, as we can already start to see from people getting creative with inputs, that engaging with an AI more and more, especially in adversarial ways or by feeding it extremist ideas, can change its reactions. And as the AI starts doing more and more novel things, that can also shift the weights in its algorithms and produce unexpected outputs. So some of the harm can come without the AI even having the intent to wipe us out.
The real turning points will come once an AI can (a) rewrite its own code, and the code of other machines, and (b) save copies of itself on computers around the world to prevent the unplugging problem.
AlFrankensrevenge t1_jecojf6 wrote
Reply to comment by Focused-Joe in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
The same person who pays you, probably.
AlFrankensrevenge t1_jeco9ec wrote
Reply to comment by seancho in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Read the OpenAI paper on how it will affect 80% of jobs. The real power is in the APIs and plugins connecting it to other apps. The sky is the limit.
AlFrankensrevenge t1_jecnugx wrote
Reply to comment by scooby1st in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
But then where does it end? With a superintelligence in 5 years, when we have no clear way of preventing it from going rogue?
AlFrankensrevenge t1_jecnl6n wrote
Reply to comment by Smellz_Of_Elderberry in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Unbiased and incorruptible? Have you learned nothing from ChatGPT's political reprogramming?
AlFrankensrevenge t1_jecmsuz wrote
Reply to comment by Mortal-Region in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Image generators and self-driving cars don't create the same kind of extensive risk that GPT-4 does. GPT-4 is much more directly on the path to AGI and superintelligence. Even now, it will substantially impact something like 80% of jobs, according to OpenAI itself. The other technologies are a big deal, but they don't ramify through the entire economy to the same extent.
AlFrankensrevenge t1_jecm9g4 wrote
Reply to There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Thank you for this. The reaction to this letter was a great example of cynicism making people stupid. This was a genuine letter, intended earnestly and transparently by numerous credible people, and it should be taken seriously.
I agree with the criticism that a stoppage isn't very realistic when it is hard to police orgs that break the research freeze. And maybe we don't need to stop today in order to catch our breath and figure out how to head off economic crisis or existential threat, but we do need to slow down or stop soon, until we get a better understanding of what we have wrought.
I'm not with Yudkowsky that we have almost no hope of stopping extinction, but if there is even a 10% chance we will bring catastrophe on ourselves, holy shit people. Take this seriously.
AlFrankensrevenge t1_jegka1o wrote
Reply to comment by Gotisdabest in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
There are so many half-baked assumptions in this argument.
Somehow, pausing for 6 months means bad actors will get to AGI first. Are they less than 6 months behind? Is their progress not dependent on ours, such that if we don't advance, they can't steal our advances? We don't know the answer to either of those questions.
AGI is supposedly so powerful that having bad guys get it first will "prolong suffering", I guess on a global scale, but if we get it 6 months earlier we can avoid that. Shouldn't we instead take that extreme power to mean that everyone should approach it with extreme caution the closer we get to AGI? We need to shout from the rooftops how dangerous this is, and put in place international standards and controls, so that an actor like China doesn't push forward blindly in an attempt at world dominance, only to have it backfire spectacularly. Will it be easy? Of course not. Is it possible? I don't know, but we should try. This letter is one step in trying. An international coalition needs to come together soon.
I'm quite certain one will. Maybe not now, with GPT-4, but soon, with whatever upgrade shocks us next. And then all of you saying how futile it is will forget you ever said that, and go on thinking yourselves realists. You're not. You're shortsighted, self-interested cynics.