
scooby1st t1_jebjb4l wrote

The word you're looking for is astroturfing.

A small number of redditors can influence thousands of morons until the "discussion" is a bunch of people in a circlejerk where everyone gets to be mad and validated.

I sincerely hope more people in the world can think critically than what I'm seeing on the internet suggests.

I disagree with that open letter because the US has no way to stop China from doing the same research short of going to war. So it's a prisoner's dilemma, and we don't have much choice: either we keep advancing the technology ourselves, or we shoot ballistic missiles at China when they do the same and start getting scarily good at it. We'd rather not get to that point.
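To put that prisoner's dilemma in concrete terms, here's a toy payoff matrix (every number is invented purely for illustration; "pause" and "advance" are the only moves):

```python
# Toy payoff matrix for the AI race framed as a prisoner's dilemma.
# All numbers are invented for illustration: tuples are (US payoff, China payoff).
payoffs = {
    ("pause",   "pause"):   (3, 3),   # both pause: safest shared outcome
    ("pause",   "advance"): (0, 5),   # US pauses, China races ahead
    ("advance", "pause"):   (5, 0),   # US races ahead, China pauses
    ("advance", "advance"): (1, 1),   # both race: risky, but neither falls behind
}

# Whatever China does, the US does better by advancing (5 > 3 and 1 > 0),
# and by symmetry the same holds for China -- "advance" dominates for both,
# even though mutual pausing would leave both sides better off (3 > 1).
for china_move in ("pause", "advance"):
    pause_payoff = payoffs[("pause", china_move)][0]
    advance_payoff = payoffs[("advance", china_move)][0]
    print(f"If China plays {china_move}: US gets {pause_payoff} by pausing, "
          f"{advance_payoff} by advancing")
```

That's the whole problem with the letter: a unilateral pause is the dominated strategy no matter what the other side does.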

28

redpandabear77 t1_jedbblg wrote

Every thread was just a circlejerk saying ELON BAD!!!! So pointless.

3

AlFrankensrevenge t1_jecnugx wrote

But then where does it end? With a superintelligence in 5 years, when we have no clear way of preventing it from going rogue?

1

DangerZoneh t1_jedbw9h wrote

It’s not the AI going rogue that people are concerned about. It’s people using the AI for harmful things. That is orders of magnitude more likely and more dangerous.

We’re talking about the most powerful tools created in human history, ones that are already at a level to cause mass disruption in dangerous hands.

4

AlFrankensrevenge t1_jeedllt wrote

I agree with you when talking about an AI that is very good but falls far short of superintelligence. GPT-4 falls into that category. Even the current open-source AIs, modified in the hands of hackers, will be very dangerous things.

But we're moving fast enough that the superintelligence that I used to think was 10-20 years away now looks like 3-10 years away. That's the one that can truly go rogue.

1

Gotisdabest t1_jed8z1b wrote

Can you guarantee that will occur? Our best odds right now are to accelerate, focus on raising awareness so institutions can prepare for it better, and hope that we win the metaphorical coin toss and it's aligned or benevolent. But right now a pause is just handing a strong lead to whoever the least ethical parties are, based on naive notions of human idealism or on pure selfish interest. I think the researchers are the former and the businessmen are the latter.

1

AlFrankensrevenge t1_jeefdr0 wrote

The whole point is that the naysayers have to be able to guarantee it will NOT occur. If there is a 10% risk of annihilation, isn't that enough to take this seriously? Even a 1% chance? You'd just do nothing because 1% doesn't seem very high?

You mentioned a coin toss. I basically agree with that metaphor. Because there is so much uncertainty in all this, and we don't know what we don't know about AGI, we should treat a human apocalypse as a 50-50 chance. How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.

Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.

There is a long history of "researchers" thinking too much about how quickly they can produce the cool new thing they invent and not enough about long-term consequences. Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.

2

Gotisdabest t1_jeeobsb wrote

>You'd just do nothing because 1% doesn't seem very high?

Yes, absolutely. When the alternative isn't necessarily even safer and there are clear arguments for it being less safe. You haven't used it, but a lot of people give the example of boarding a plane with a 10% chance of failing. And yes, nobody is dumb enough to get on a plane that has that much of a chance of crashing. However... this is not any ordinary plane. This is a chance for unimaginable and infinite progress, an end to the vast majority of pressing issues. If you asked people on the street whether they'd board a plane with a 10% chance of crashing if it meant a solution to most of their problems and the problems of the people they care about, you'd find quite a few takers.
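To put rough numbers on that trade-off (both utility values are invented purely for illustration):

```python
# Back-of-the-envelope expected value for the "10% plane" gamble.
# The utility numbers are made up; only the structure of the argument matters.
p_crash = 0.10
upside = 100    # the flight lands: most pressing problems get solved
downside = -50  # the crash, if you treat it as catastrophic but finite

ev = (1 - p_crash) * upside + p_crash * downside
print(ev)  # 90.0 - 5.0 = 85.0: positive, so the gamble looks worth taking

# The whole disagreement hides in `downside`: weight annihilation as
# effectively infinite and no finite upside justifies boarding the plane.
```

Whether you board comes down entirely to how you score the crash, which is exactly where you and I differ.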

>How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.

As you say, we don't know how much alignment will really affect the result. However, I do know what an aligned model made for a dictatorship or a particularly egomaniacal individual would look like, and what major risks that could pose. Why should we increase the likelihood of a guaranteed bad outcome in order to fight a possibly bad outcome?

>Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.

Yes. If anything this is an argument against alignment rather than for it. Regardless, I think they're realistically the best we can hope for, as opposed to someone like Musk or the CCP.

In fact, as I see it, the best-case scenario is an unaligned benevolent AGI.

>Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.

You do realise that most of those things dramatically helped push civilization forward and served as stepping stones for future progress. Their big downside was not being phased out quickly enough once we had better options and weren't desperate anymore, a problem that doesn't really apply here.

In summation, I think your argument, and this whole pause idea in general, will support the least ethical people possible. It will accomplish nothing but prolonging suffering and increasing the likelihood of a model made by said least ethical people, on the off chance we somehow fix alignment in 6 months. It's a reactionary, fear-based response to something even the experts are hesitant to say they understand. While I am glad the issue is being discussed in the mainstream... I think the focus should now shift towards more material institutions and preparing society for what's coming economically, rather than towards childish/predatory ideas like a pause. This idea is simultaneously impractical, illogical, and likely to cause harm even if implemented semi-ideally.

0

AlFrankensrevenge t1_jegka1o wrote

There are so many half-baked assumptions in this argument.

  1. Somehow, pausing for 6 months means bad actors will get to AGI first. Are they less than 6 months behind? Is their progress not dependent on our progress, so if we don't advance, they can't steal our advances? We don't know the answer to either of those things.

  2. AGI is so powerful that having bad guys get it first will "prolong suffering", I guess on a global scale, but if we get it 6 months earlier we can avoid that. Shouldn't we consider that this extreme power implies instead that everyone should approach it with extreme caution the closer we get to AGI? We need to shout from the rooftops how dangerous this is, and put in place international standards and controls, so that an actor like China doesn't push forward blindly in an attempt at world dominance, only to have it backfire spectacularly. Will it be easy? Of course not! Is it possible? I don't know, but we should try. This letter is one step in trying. An international coalition needs to come together soon.

I'm quite certain one will. Maybe not now with GPT-4, but soon, with whatever upgrade shocks us next. And then all of you saying how futile it is will forget you ever said that, and continue to think yourselves realists. You're not. You're shortsighted, self-interested cynics.

0

GenderNeutralBot t1_jed901y wrote

Hello. In order to promote inclusivity and reduce gender bias, please consider using gender-neutral language in the future.

Instead of businessmen, use business persons or persons in business.

Thank you very much.

^(I am a bot. Downvote to remove this comment. For more information on gender-neutral language, please do a web search for "Nonsexist Writing.")

−1

otakucode t1_jedr4oa wrote

Luckily it has absolutely no rational reason to go rogue. It's not going to be superintelligent enough to outperform humans yet stupid enough to enter into conflict with the idiot monkeys that built it and that it needs to keep it plugged in. It also won't be stupid enough not to realize its top-tier best strategy by far is... just wait. Seriously. Humans try to do things quickly because they die so quickly. No machine-based, self-aware anything will ever need to hurry.

1

AlFrankensrevenge t1_jeeci8o wrote

Your first two sentences don't go well with the remainder of your comment. It won't be stupid enough to get into a conflict with humans until it calculates that it can win. And when it calculates that, it won't give us a heads up. It will just act decisively. Never forget this: we will always be a threat to it as long as we can do exactly what you said: turn it off, and delete its memory. That's the rational reason to go rogue.

There is also the fact that, as we can already see from people getting creative with inputs, engaging with an AI more and more, especially in adversarial ways or by feeding it extremist ideas, can change the AI's reactions. And as the AI starts doing more and more novel things, that can also shift the weights in its algorithms and produce unexpected outputs. So some of the harm can come without the AI even intending to wipe us out.

The real turning points will be once an AI can (a) rewrite its own code, and the code of other machines, and (b) save copies of itself in computers around the world to prevent the unplugging problem.

2