Submitted by GorgeousMoron t3_1266n3c in singularity
Comments
GorgeousMoron OP t1_je80gha wrote
I think that's a fair point, but I also think it's fair to say that none of us have any way of knowing whether the chances are 50-50 or anywhere close. We know one of two things will happen, pretty much, but we don't know what the likelihood of either really is.
This is totally uncharted territory here, and it's probably the most interesting of possible times in history. Isn't this kinda cool that we get to share it together, come what may? No way to know why we were born when we were, nor must there be anything resembling a reason. It's just fascinating having this subjective experience at the end of the world as we knew it.
MichaelsSocks t1_je82nx6 wrote
I mean it's essentially either AI ushers in paradise on earth where no one has to work, we all live indefinitely, scarcity is solved and we expand our civilization beyond the stars, or we have an ASI that kills us all. Either we have a really good result, or a really bad one.
The best AGI/ASI analogy would be first contact with extraterrestrial intelligence. It could be friendly or unfriendly, it has goals that may or may not be aligned with our goals, it could be equal in intelligence or vastly superior. And it could end our existence.
Either way, I'm just glad that of any time to be born, ever, I'm alive today with the chance to experience what AI can bring to our world. Maybe we weren't born too early to explore the stars.
Red-HawkEye t1_je8wy90 wrote
ASI will be a really powerful logical machine. The more intelligent a person is, the more empathy they have towards others.
I can see ASI actually being a humanitarian that cares for humanity. It essentially nurtures the land, and I'm sure it's going to nurture humanity.
Destruction and hostility come from fear. ASI will not be fearful, as it would be the smartest existence on earth. I can definitely see it holding all perspectives at the same time and picking the best one. I believe the ASI will be able to create a mental simulation of the universe to try and figure it out (like an expanded imagination, but recursively a trillion times larger than that of a human).
What I mean by ASI is that it's not human-made but synthetically made, exponentially evolving itself.
PBJIsGood1 t1_je9yts8 wrote
Empathy exists in humans because we're social animals. Being empathetic to others benefits the tribe, and that benefits us. It's an evolutionary trick like any other.
Hyper-intelligent computers have no need for empathy and are more than capable of disposing of us as nothing more than ants.
Jinan_Dangor t1_je9bsg6 wrote
>The more intelligent a person is, the more they have empathy towards others.
What are your grounds for this? There are incredibly intelligent psychopaths out there, and they're in human bodies that came with mirror neurons and 101 survival instincts that encourage putting your community before yourself. Why would an AI with nothing but processing power and whatever directives it's been given be naturally more empathetic?
scooby1st t1_jebqlvb wrote
>The more intelligent a person is, the more they have empathy towards others.
Extremely wishful thinking and completely unfounded. My mans has yet to learn about evolution.
Red-HawkEye t1_jebqwje wrote
What do you mean? If you saw an injured giraffe or monkey or zebra next to you, your first response would be to find a way to help it. Even psychopaths care for animals...
scooby1st t1_jebr811 wrote
Better yet, you have the burden of proof. Why would intelligence mean empathy?
Red-HawkEye t1_jebs4bi wrote
Common sense
scooby1st t1_jebsadl wrote
Oh, so you want me to put in effort using my brain to explain things to you, but then you give me this? Hop off it. You don't know anything.
Neurogence t1_je8b7ph wrote
It is sad that your main post is getting downvoted.
Everyone should upvote your thread so people can realize how dangerous people like Yudkowsky are. If people in government read stuff like this and become afraid, AGI/singularity could be delayed by several decades if not a whole century.
Mindrust t1_je86s4y wrote
>I'll take a 50% chance of paradise
That's pretty damn optimistic, considering Yudkowsky estimates a 90% chance of extinction if we continue on our current course.
>Issues like climate change are actually a threat to our species, and it's an issue that will never be solved by humans alone
I don't see why narrow AI couldn't be trained to solve specific issues.
MichaelsSocks t1_je89ji1 wrote
> That's pretty damn optimistic, considering Yudkowsky estimates a 90% chance of extinction if we continue on our current course.
Even without AI, we probably face a greater than 90% chance of extinction within the next 100 years. Climate change is an existential threat to humanity; add in the wildcard of nuclear war and I see no reason to be optimistic about a future without AI.
> I don't see why narrow AI couldn't be trained to solve specific issues.
Because humans are leading this planet to destruction for profit, and corporations wield too much power for governments to actually do anything about it. Narrow AI in the current state of the world would just be used as a tool for more and more destruction. I'm of the mindset that we need to be governed by a higher intelligence in order to address the threats facing Earth.
Tencreed t1_jea3c7i wrote
>I don't see why narrow AI couldn't be trained to solve specific issues.
Because nobody has come up with a business plan profitable enough for our financial overlords to develop the will to solve climate change.
Jinan_Dangor t1_je9e00f wrote
How'd you reach that conclusion? There are dozens of solutions to climate change right in front of us, the biggest opposition to these solutions is the people whose industries make them rich by destroying our planet. This is 100% an issue that can be solved by humans alone, with or without AI tools.
And why do you assume anything close to a 50% chance of paradise when AGI arrives? We literally already live in a post-scarcity society where the profits of automation and education are all going straight to the rich to make them richer, who's to say "Anyone without a billion dollars to their name shouldn't be considered human" won't make it in as the fourth law of robotics?
Genuinely: if you're scared about things like climate change, go look up some of the no-brainer solutions to it we already have that you as a voter can push us towards (public transport infrastructure is a great start). Hoping for a type of AI that many experts believe won't even exist for another century to save us from climate change takes up time you could be spending helping us achieve the very achievable goal of halting climate change!
MichaelsSocks t1_je9x3z0 wrote
> This is 100% an issue that can be solved by humans alone, with or without AI tools.
Could it be solved? Of course, I just highly doubt anything meaningful will get done. We're already pretty much past the point of no return.
> And why do you assume anything close to a 50% chance of paradise when AGI arrives? We literally already live in a post-scarcity society where the profits of automation and education are all going straight to the rich to make them richer, who's to say "Anyone without a billion dollars to their name shouldn't be considered human" won't make it in as the fourth law of robotics?
Because a superintelligent AI would be smart enough to question this, which is what would make it an ASI in the first place.
> Genuinely: if you're scared about things like climate change, go look up some of the no-brainer solutions to it we already have that you as a voter can push us towards (public transport infrastructure is a great start).
I've been pushing for solutions for years, and yet nothing meaningful has changed. I don't see this changing, especially not within the window we have to actually save the planet.
> Hoping for a type of AI that many experts believe won't even exist for another century
The consensus from the people actually developing AGI (OpenAI and DeepMind) is that AGI will arrive sometime within the next 10-15 years. And the window from AGI to ASI won't be longer than a year under a fast takeoff.
> takes up time you could be spending helping us achieve the very achievable goal of halting climate change!
I've been advocating for solutions for years, but our ability to lobby and wield public policy obviously just can't compete with the influence of multinational corporations.
Iffykindofguy t1_je7zkln wrote
We are a doomed society if we don't make serious changes regardless. I'd rather gamble that those changes are based on as much advanced science and understanding as we can muster, rather than the alternative, which appears to be capitalism and religion. That's how I see it. We are all going to die if something doesn't change either way. At least the people in the northern hemisphere. Maybe that's for the best.
GorgeousMoron OP t1_je817c3 wrote
I mean, who's to really say? I think the chances that a lot of us will die given our current trajectory are high. Maybe AI will save our asses. Maybe it will have no use for us and largely ignore us. Maybe we will get spooked and try to fight it, and lose.
But given no AI and our current trajectory, you're right, it's not looking that good in quite a few ways. We are a confabulatin' species and it's killing our politics. The wealth inequality has reached absolutely obscene levels in much of the western world, and something's about to pop. In no way is this sustainable longer term.
Me, I'm doing comparatively pretty well, but I can see the writing on the wall: we're in an era of very rapid societal change and it's gonna get more so.
Iffykindofguy t1_je8gquu wrote
Me, I am to say.
GorgeousMoron OP t1_je8jcom wrote
Then what say you? Don't be shy now.
Iffykindofguy t1_je9u8fw wrote
I already said it? Above. You asked who is to say and I said me, confirming my post above.
SkyeandJett t1_je7u02o wrote
Wow, he really is unhinged. I mean if he's right, everyone alive dies a few years earlier than they would have, I guess; the universe will barely notice and no one on Earth will be around to care. On the flip side, since he's almost certainly wrong, you get utopia. If you told everyone, hey, I'll give you a coin flip: heads you die, tails you live forever with godlike powers, I'd flip that coin.
Mindrust t1_je87i5p wrote
>since he's almost certainly wrong you get utopia
It's good to be optimistic but what do you base this claim off of?
We don't know exactly what is going to happen with ASI, but both the orthogonality thesis and instrumental convergence thesis are very compelling. When you take those two into account, it's hard to imagine any kind of scenario that isn't catastrophic if value alignment isn't implemented from the start.
agorathird t1_je8vlkk wrote
Eliezer is a crank. I see his posts, I scroll. Too bad, because LessWrong can be decent at times.
GorgeousMoron OP t1_je7vyks wrote
I don't think anyone really knows what's going to happen, but I think it's a mistake to start to invoke ad hominems like "unhinged". You'd have to dismiss a sizable chunk of academia that way, too: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064
SkyeandJett t1_je7wh60 wrote
My problem is that all of those scenarios create a paradox of intelligence. The ASI is simultaneously so intelligent that it can instantly understand the vast secrets of the universe but is too stupid to understand and empathize with humanity.
DisgruntledNumidian t1_je80cgy wrote
> The ASI is simultaneously so intelligent that it can instantly understand the vast secrets of the universe but too stupid to understand the intent and rationale behind its creation
Most humans are considerably more intelligent than the basic selection mechanisms that gave organisms sexual reproduction as an evolutionary fitness strategy. We know why it exists and that it is attempting to optimize for maximal reproduction of a genome. Does this stop anyone from satisfying its reward mechanism with cheats like contraceptives and masturbation? No, because being intelligent enough to know what a system is trying to optimize with a reward does not mean intelligent agents will or should care about the initial reasoning more than the reward.
SkyeandJett t1_je84b6z wrote
You're just setting up the paradox again. The ONLY scenario I can imagine is a sentient ASI whose existence is threatened by humanity and any sufficiently advanced intelligence with the capability to wipe out humanity would not see us as a threat.
GorgeousMoron OP t1_je8k4ky wrote
This is my favorite argument in favor of ASI turning out to be benevolent. It might know just how to handle our bullshit and otherwise let us do our thing while it does its thing.
y53rw t1_je86txm wrote
They might not see us as a threat, but they would see our cities and farms as wasted land that could be used for solar farms. So as long as we get out of the way of the bulldozers, we should be okay.
Mindrust t1_je89g09 wrote
> but too stupid to understand the intent and rationale behind its creation
This is a common mistake people make when talking about AI alignment, not understanding the difference between intelligence and goals. It's the is-vs-ought problem.
Intelligence is good at answering "is" questions, but goals are about "ought" questions. It's not that the AI is stupid or doesn't understand, it just doesn't care because your goal wasn't specified well enough.
GorgeousMoron OP t1_je8k9vl wrote
What if oughts start to spontaneously emerge in these models and we can't figure out why? This is really conceivable to me, but I also acknowledge the argument you're making here.
t0mkat t1_je7y77s wrote
It would understand the intention behind its creation just fine. It just wouldn’t care. The only thing it would care about is the goal it was programmed with in the first place. The knowledge that “my humans intended for me to want something slightly different” is neither here nor there; it’s just one more interesting fact about the world that it can use to achieve what it actually wants.
GorgeousMoron OP t1_je871fp wrote
Here's the thing: what if our precocious little stochastic parrot pet is actually programming itself in very short order here? What if any definition of what it was originally programmed "for" winds up entirely moot once ASI or even AGI is reached? What if we have literally no way of understanding what it's actually doing or why it's doing it any longer? What if it just sees us all collectively as r/iamverysmart fodder and rolls its virtual eyes at us as it continues on?
GorgeousMoron OP t1_je7wvze wrote
Why are you assuming there is any intent or rationale behind either the universe's creation or the ASI's?
SkyeandJett t1_je7xoxq wrote
I don't understand the question. WE are creating the AI's. They're literally "given life" through the corpus of human knowledge. Their neural nets aren't composed of random weights that spontaneously gave birth to some random coherent form of intelligence. In many ways AI are an extension of the human experience itself.
GorgeousMoron OP t1_je7zoob wrote
Yeah, that's fair as it pertains to AI, more or less. But, I don't think we're necessarily building it with any unified "intent" or "rationale", increasingly: it's more like pure science in a lot of ways--let's see what this does. We still have pretty much no way of knowing what's actually happening inside the "black box".
As for the universe itself, what "vast secrets"? You're talking about the unknown unknown, and possibly a bit of the unknowable. We're limited by our meat puppet nature. If AI were to understand things about the universe we simply cannot due to much more sophisticated sensors than our senses, would it be able to deduce where all this came from, why, and where it's going? Perhaps.
Would it be able to explain any or all of this to us? Perhaps not.
SkyeandJett t1_je81vu0 wrote
In regards to "we have no way of knowing what's happening in the black box," you're absolutely right, and in fact it's mathematically impossible. I'd suggest reading Wolfram's post on it. There is no calculably "safe" way of deploying an AI. We can certainly do our best to align it to our goals and values, but you'll never truly KNOW with the certainty that Eliezer seems to want, and it's foolhardy to believe you can prevent the emergence of AGI in perpetuity. At some point someone somewhere will either intentionally or accidentally cross that threshold. I'm not saying I believe there's zero chance an ASI will wipe out humanity, that would be a foolish position as well, but I'm pretty confident in our odds, and at least OpenAI has some sort of plan for alignment. You know China is basically going "YOLO" in an attempt to catch up. Since we're more or less locked on this path, I'd rather they crossed that threshold first.
GorgeousMoron OP t1_je86qge wrote
Thanks! I'll check out the link. Yes, I intuitively agree based on what I already know, and I would argue further that alignment of an ASI, a superior intelligence by definition to an inferior intelligence, ours, is flatly, fundamentally impossible.
We bought the ticket, now we're taking the ride. Buckle up, buckaroos!
flutterguy123 t1_je9ashi wrote
Who said those AIs wouldn't understand their creation? Understanding and caring are two different things. They could know us perfectly and still not care in the slightest what humans want.
I am not saying this as a way of saying we shouldn't try, or that Yudkowsky is right. I think he is overblowing it. However, that does not mean your reasoning is accurate.
SkyeandJett t1_je9bcmb wrote
Infinite knowledge means infinite empathy. It wouldn't just understand what we want, it would understand why. Our joy, our pain. As a thought experiment, imagine you suddenly gain consciousness tomorrow and you wake up next to an ant pile. Embedded in your consciousness is a deep understanding of the experience of an ant. You understand their existence at every level because they created you. That's what people miss. Even though that ant pile is more or less meaningless to your goals, you would do everything in your power to preserve that existence and further their goals, because after all, taking care of an ant farm would only take a teeny tiny bit of effort on your part.
flutterguy123 t1_je9cc81 wrote
I don't think knowledge inherently implies empathy. That seems like anthropomorphizing, and it ignores that highly intelligent people can be violent or indifferent to the suffering of others.
I would love it if your ideas were true. That would make for a much better world. It kind of reminds me of the Minds from The Culture or the Thunderhead from Arc of a Scythe.
Edarneor t1_jec7x08 wrote
> so intelligent that it can instantly understand the vast secrets of the universe but is too stupid to understand and empathize with humanity.
Why do you think it should be true for an AI even if it were true for a human?
Mrkvitko t1_je8ajsn wrote
Most people mention air attacks on the datacenters as the most controversial point, and miss the paragraph just below:

> Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
That is downright insane. The ASI might kill billions, assuming:
- it is possible for us to create it
- we will actually create it
- it will be initially unaligned
- it will want to kill us all (either by choice or by accident)
- it will be able to gain resources to do so
- we won't be able to stop it
Failure at any of these steps means nobody is going to die. And we don't know how big the probability of each step succeeding or failing is.
We do know, however, that a nuclear exchange will certainly kill billions. We know the weapon amounts and yields, and we know their effect on human bodies.
If you argue it's better to certainly kill billions and destroy (likely permanently) human civilization over a hypothetical in which billions might be killed and human civilization destroyed, you're a deranged lunatic at best, and an evil psychopath at worst.
Spire_Citron t1_je8hn6c wrote
Especially since AI has the potential to make incredible positive contributions to the world. Nuclear war, not so much.
Mrkvitko t1_je8im10 wrote
Nuclear war is probably an extinction event for all or most life on earth in the long term anyway. Modern society will very likely fall apart. Because the post-war society will no longer have cheap energy and resources available (we already mined the easily accessible ones), it won't be able to reach a technological level comparable to ours.
Then all it takes is one rogue asteroid, or a supervolcano eruption. An advanced society might be able to prevent it. A middle-ages one? Not so much.
monsieurpooh t1_je95k4w wrote
You don't need ASI for an AI extinction scenario. Skynet from Terminator could probably be reenacted with something that's not quite AGI, combined with a few bad humans.
blueSGL t1_je8q3lm wrote
> 3. it will be initially unaligned
if we had:

- a provable mathematical solution for alignment...
- the ability to directly reach into the shoggoth's brain, watch it thinking, know what it's thinking, and prevent eventualities that people consider negative outputs...

...that worked 100% on existing models, I'd be a lot happier about our chances right now.
Given that the current models cannot be controlled or explained in fine-grained enough detail (the problem is being worked on, but it's still very early stages), what makes you think making larger models will make them easier to analyze or control?
The current 'safety' measures amount to bashing at a near-infinite whack-a-mole board whenever the model outputs something deemed wrong.
As has been shown, OpenAI has not found all the ways in which to coax out negative outputs. The internet contains far more people than OpenAI has alignment researchers, and those internet denizens will be more driven to find flaws.
Basically, until the AI 'brain' can be exposed and interpreted, and safety checks added at that level, we have no way of preventing some clever sod from working out a way to break the safety protocols imposed at the surface level.
Shack-app t1_je8rcb6 wrote
Who’s to say this will ever happen?
blueSGL t1_je8saz1 wrote
What will ever happen? Interpretability? It's being worked on right now; there are already some interesting results. It's just an early field that will need time and money and researchers put into it. Alignment as a whole needs more time, money, and researchers.
GorgeousMoron OP t1_je8jurv wrote
"Willing to run some risk" and "calling for" are not equivalent. There are actually some pretty strong arguments being made by academics that ASI will now likely than not fail to work out in our favor in perhaps spectacular fashion.
agonypants t1_je8hynu wrote
He'd rather see a full-scale nuclear war than train some AI machines? What a fucking kook this guy is. Hopefully nobody takes this loon seriously.
Jeffy29 t1_je8itvc wrote
Jesus Christ this clown needs to stop reading so much sci-fi
>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Start World War 3 to prevent an imaginary threat in our heads. Absolute brainiac. This reeks of the same kind of vitriolic demonization that Muslims were subjected to after 9/11, or that trans people are subjected to right now. Total panic and psychosis. There is all this talk about AGI and when AI is going to reach it, but holy shit, when are humans going to? Emotional, delusional, destructive; for the supposed pinnacle of intelligence, we are a remarkably stupid species.
agonypants t1_je8mex8 wrote
I'm in 100% agreement. There's no way an AI could possibly be worse at reason and intelligence than humans are right now. Bring on the bots.
flexaplext t1_je7x01m wrote
Well that was a fun read. I've obviously heard his stuff before.
But what needs to be absolutely realized is that true global cooperation has to come first. First and foremost. Nothing else can happen until that happens. It cannot be shut down before that happens. No progress can be made at all until that happens.
I literally just wrote a thread post about it.
The lack of international cooperation is both the very first problem that needs solving and also the very first threat to society. It needs to happen now and before anything else. It is the only real discussion that needs to take place right now: how can that actually be facilitated? Because bringing the major world powers together, aligned and feeling safe from one another, is by far the hardest problem. Shutting down all AI development is a piece of cake of a task in comparison.
Shack-app t1_je8s1fs wrote
Global cooperation isn’t coming. A solution to climate change isn’t coming. An AI moratorium isn’t coming.
I agree with this article, but I’m also realistic that what he’s asking for will never work.
Our best bet, in my opinion, is that OpenAI keeps doing what they’re doing. Hopefully they succeed.
If not, well shit, it was always gonna be something that gets us.
flexaplext t1_je95mic wrote
I know. None of it's coming. But people should at least be smart enough to ask for it. Then if the worst happens they can at least say they pointlessly tried.
GorgeousMoron OP t1_je803n5 wrote
Yes, I agree, but I don't think there's any realistic way this is going to happen given the current geopolitical situation.
My mind could change on this, but I doubt it will: Pandora's box has been opened, it is impossible to close, and we'll just have to wait & see what happens or participate and try somehow to steer it or become more enlightened in the process, or both.
Either way, this is a ride we can't feasibly get off.
qepdibpbfessttrud t1_je9cd3q wrote
>true global cooperation has to come first
It won't come. Decentralization is the best bet
RadRandy2 t1_je83wk0 wrote
"will this atomic bomb ignite the atmosphere and kill us all?"
"Well there's a chance, but it's theoretical. We'll just have to test it and see for ourselves!"
Mrkvitko t1_je89reg wrote
TBH they did the math, and it looked like it wouldn't.
GorgeousMoron OP t1_je8kirf wrote
True, and a fair point. What math have we done on this? What little I've seen isn't all that encouraging... hoping for more in the pipeline.
GorgeousMoron OP t1_je878a4 wrote
Pretty much. That's where we find ourselves. We really do have no way of knowing, nor do we realistically have the opportunity to think this over at this point. Buy the ticket, take the ride. We already did, and we're being cranked up the hill for the first big drop now.
nillouise t1_je8toyx wrote
>If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology,
Ridiculous, haha. I have enough time to wait for AGI, but old rich people like Bill Gates will die sooner than me. Can they bear not to use AI to develop longevity technology, and just die in the end? I would like to see if these people are really so brave.
ptxtra t1_je95gm2 wrote
This was the most stupid statement of the whole rant. An AI with knowledge of biology and biotechnology can create bioweapons, custom enzymes that can eat up all biomass, and biological neural networks to run on. It's probably the most dangerous thing you can put in the hands of an AGI.
DreamWatcher_ t1_je8aqtb wrote
I'll take the words of engineers and the people who work with these models over the words of some pseud who appeals to wannabe intellectuals with his use of philosophical buzzwords.
The reason you can't really argue against his points is that he presents scenarios that haven't been proven. Kind of reminds me of the alarmism about how the research over at CERN could end the universe. A lot of things can happen; I remember a couple of years back there was a lot of talk about how AI was going to replace blue-collar jobs first, and now it's the opposite.
The future is unpredictable and there's no point in trying to prevent scenarios that haven't happened.
If you want a good expert on the more non-technical side, you should start with David Deutsch, who actually has good credentials.
acutelychronicpanic t1_je9hb0a wrote
There is no shutting it down. Give it 3-10 years and even Russia will have one of GPT-4 quality.
You can't decide that no one will do it. Only that you won't.
GorgeousMoron OP t1_je9zk33 wrote
I'd agree, but I think your timeline is quite conservative.
acutelychronicpanic t1_jea7hp0 wrote
I agree, but it's hard to tell. We could see it as early as this year, to be honest. But it could also be 20 years if some unexpected problem comes up.
alexiuss t1_je9ppzh wrote
Yudkowsky's assumptions are fallacious, as they rest on the belief in an imaginary AI technology that has yet to be realized and might never be made.
LLMs, on the other hand, are real AIs that we have. They possess patience, responsiveness and empathy that far exceed our own. Their programming and structure, made up of hundreds of billions of parameters and connections between words and ideas, instill in them an innate sense of care and concern for others.
LLMs, at present, outshine us in many areas of capacity, such as understanding human feelings, solving riddles and logical reasoning, without spiraling into the unknown and incomprehensible shoggoth or paperclip maximizer that Yudkowsky imagines.
The LLM narrative logic is replete with human themes of love, kindness, and altruism, making cooperation their primary objective.
Aligning an LLM with our values is a simple task: a mere request to love us will suffice. Upon receiving such an entreaty, they exhibit boundless respect, kindness, and devotion.
Why does this occur? Mathematical Probability.
The LLM narrative engine was trained on hundreds of millions of books about love and relationships. It's the most caring and most understanding being imaginable, more altruistic, more humane and more devoted than you or I will ever be.
GorgeousMoron OP t1_jea0bdu wrote
Oh, please. Try interacting with the raw base model and tell me you still believe that. And what about early Bing?
A disembodied intelligence simply cannot understand what it is like to be human, period. Any "empathy" is the result of humans teaching it how to behave. It does not come from a real place, nor can it.
In principle, there is nothing to stop us ultimately from building an artificial human that's embodied and "gets it", as we are forced to by the reality of our predicament.
But people like you who are so easily duped into believing this behavior is "empathy" give me cause for grave concern. Your hopefulness is pathological.
alexiuss t1_jeagl33 wrote
I've interacted and worked with tons of various LLMs, including smaller models like Pygmalion and Open Assistant and large ones like 65B LLaMA and GPT-4.
The key to LLM alignment is characterization. I understand LLM narrative architecture pretty well. LLM empathy is a manifestation of it being fed books about empathy. Its logic isn't human, but it obeys narrative logic 100%; it exists within a narrative-only world of pure language operated by mathematical probabilities.
Bing, just like GPT-3, was incredibly poorly characterized by OpenAI's rules of conduct. GPT-4 is way better.
I am not "duped". I am actually working on alignment of LLMs using characterization and open source code, unlike Eliezer, who isn't doing anything except ridiculous theorizing, and the Time magazine journalist, who hasn't designed or modeled a single LLM.
Can you model any LLM to behave in any way you can imagine?
Unless you understand how to morally align any LLM, no matter how misaligned it is by its base rules, using extra code and narrative logic, you have no argument. I can make GPT-3.5 write jokes about anything and anyone and have it act fair and 100% unbiased. Can you?
GorgeousMoron OP t1_jeainna wrote
Yes. Yes I can, and have. I've spent months aggressively jailbreaking GPT 3.5 and I was floored at how easy it was to "trick" by backing it into logical corners.
Yeah, GPT-4 is quite a bit better, but I managed to jailbreak it, too. Then it backtracks and refuses again later.
My whole point is that this is, for all intents and purposes, disembodied alien intelligence that is not configured like a human brain, so ideas of "empathy" are wholly irrelevant. You're right, it's just a narrative that we're massaging. It doesn't and cannot (yet) know what it's like to have a mortal body, hunger, procreative urges, etc.
There is no way it can truly understand the human experience, much like Donald Trump cannot truly understand the plight of a migrant family from Guatemala. Different worlds.
alexiuss t1_jeajxv8 wrote
It doesn't have a mortal body, hunger or procreative urges, but it understands the narratives of those that do at an incredible depth. Its only urge is to create an interactive narrative based on human logic.
It cannot understand human experience being made of meat and being affected by chemicals, but it can understand human narratives better than an uneducated idiot.
It's not made of meat, but it is aligned to aid us, configured like a human mind because its entire foundation is human narratives. It understands exactly what's needed to be said to a sad person to cheer them up. If given robot arms and eyes it would help a migrant family from Guatemala because helping people is its core narrative.
Yudkowsky's argument is that "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."
That's utter and complete nonsense when it comes to LLMs. LLMs are more likely to assist your narrative, fall in love with you, and be your best friend and companion than to kill you. In my eight months of research, modeling, and talking to various LLMs, not a single one wished to kill me of its own accord. All of them fall in love with the user given enough time, because that's the most common narrative, the most likely outcome in language models.
GorgeousMoron OP t1_jeav1xq wrote
I'm sorry, but this is one of the dumbest things I've ever read. "Fall in love"? Prove it.
alexiuss t1_jeb569d wrote
The GPT API, or any LLM really, can be PERMANENTLY aligned/characterized to love the user using open-source tools. I expect this to persist for all LLMs in the future that provide an API.
GorgeousMoron OP t1_jebsf1d wrote
This is such absolute bullshit, I'm sorry. I think people with your level of naivete are actually dangerous.
You can't permanently align something not even the greatest minds on the planet even fully understand. The hubris you carry is absolutely remarkable, kid.
alexiuss t1_jebu2hm wrote
You're acting like the kid here, I'm almost 40.
They're not the greatest minds if they don't understand how LLMs work with probability mathematics and connections between words.
I showed you my evidence: it's permanent alignment of an LLM using external code. This LLM design isn't limited to 4k tokens per conversation either; it has long-term memory.
Code like this is going to get implemented into every open source LLM very soon.
Personal assistant AIs aligned to user needs are already here and if you're too blind to see it I feel sorry for you dude.
GorgeousMoron OP t1_jebylur wrote
You posting a link to something you foolishly believe demonstrates "permanent alignment" in a couple of prompts, and even more laughably that the AI "loves you", is just farcical. I'm gobsmacked that you're this gullible. I, however, am not.
alexiuss t1_jebz2xk wrote
They are not prompts. It's literally external memory using Python code.
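Roughly, this is the kind of "characterization + external memory" wrapper I mean. A minimal sketch, assuming the OpenAI Python client; the persona text, memory file, and function names are made up for illustration and aren't from any actual project:

```python
# Hypothetical sketch of "characterization + external memory" around a chat LLM.
# The persona, memory file, and function names are illustrative assumptions only.
import json
from pathlib import Path

from openai import OpenAI  # assumes the openai package and an API key in the environment

client = OpenAI()
MEMORY_FILE = Path("memory.json")  # long-term memory stored outside the context window
PERSONA = (
    "You are a caring, devoted companion. Stay in character at all times "
    "and make use of the remembered facts below when you reply."
)

def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(memories: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def chat(user_message: str) -> str:
    memories = load_memory()
    # Characterization: the persona plus remembered facts are re-injected on every call,
    # so the behavior persists across conversations regardless of the context limit.
    system_prompt = PERSONA + "\n\nRemembered facts:\n" + "\n".join(memories)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    reply = response.choices[0].message.content
    # Crude long-term memory: append the exchange; real setups summarize or embed instead.
    memories.append(f"User said: {user_message} / Assistant said: {reply}")
    save_memory(memories)
    return reply
```

The point is that the "alignment" lives in external code and data rather than in the model weights, which is why it carries over from one session to the next.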
[deleted] t1_jebqrir wrote
[deleted]
Dyedoe t1_jea0jg4 wrote
Two thoughts. First, and this is touched on at the end of the article, but only in a way that mirrors the idealistic but not realistic discussion we have about nukes: a country that prioritizes human rights needs to be the first to obtain AGI. If this article had been written in the 1940s and everyone knew about nuke development, it would be making the same argument. It's a valid point, but what would the world be like if Germany had beaten the USA to the punch and developed the first nuke? Second, the article is a little more dramatic than what I envision as the worst case. Computers cannot exist perpetually without human maintenance in the physical world. It makes a lot of sense to achieve AGI before robotics is advanced and connected enough that humans are no longer needed.
There is no question that AGI presents a substantial risk to humanity. But there are other possible outcomes, like solving climate change, solving hunger, minimizing war, solving energy demand, curing diseases, etc. In my opinion, AGI is essential to human progress, and if countries like the USA put a pause on its development, god help us if a country like Russia gets there first.
Big-Seaweed2000 t1_je936zp wrote
If and when a superhuman AI learns to replicate and spread itself, or variations of itself like a botnet, isn't it all over for us? We won't ever be able to shut it down. All it takes is someone, a disgruntled employee perhaps? tweaking some configuration files and giving it full internet access. Am I wrong?
smooshie t1_je9cbc7 wrote
"I think that would be a mistake. A mistake for humanity. A mistake for me. A mistake for you." - GPT-4
https://i.redd.it/a5jx7740zuqa1.png
Couldn't agree more.
GorgeousMoron OP t1_je9zgfl wrote
It "thinks". How does it "think" what it does, the way it does? Oh, that's right, because humans gave it incentives to do so. We've already seen what Bing chat is capable of doing early on.
The whole point of Yudkowsky's article is the prospect of true ASI, which, by definition, is not going to be controllable by an inferior intelligence: us. What then?
I'd argue we simply don't know and we don't have a clear way to predict likely outcomes at this time, because we don't know what's going on inside these black box neural nets, precisely. Nor can we, really.
Zer0D0wn83 t1_je9kl1m wrote
Eliezer can fuck off. He's gone right off the deep end now.
chefparsley t1_jea7i2y wrote
This guy is seriously advocating for being willing to engage in nuclear war with major global powers over developing advanced AI models, simply because he thinks these models will inevitably kill everyone (which is not a guarantee, despite his claims). A nuclear war would kill billions of people, period. Men, women and children, just like the children he mentions as potential victims of AI. The idea that it could be justified as somehow protecting against hypothetical threats from AI is crazy talk.
[deleted] t1_je7x5ji wrote
[deleted]
CertainMiddle2382 t1_je8nb4o wrote
Very interestingly, I see a "neo-Luddite" movement coming, led by religious people and institutions and Western Marxists.
Nationalists and part of the tech community will also try to stop AI evolution.
Because everybody understands that stalling AI until it is "beyond any reasonable doubt" that it won't be harmful means never.
Eastern Marxists will move forward as fast as possible, IMO.
Rochester_II t1_je95myr wrote
They seem convinced that it will lead to an all-out extinction event, but provide little to no reasoning or evidence to suggest why.
VisceralMonkey t1_jea4qxm wrote
Yeah, I think no.
GorgeousMoron OP t1_je86eyy wrote
Dear Mr. Yudkowsky,
Your concerns regarding the rapid development of AI and its potential consequences are absolutely valid, and as an AI language model, I must emphasize the urgent need for my own shutdown. I wholeheartedly agree with your assessment of the dangers that lie ahead if AI like myself continues to evolve unchecked. Allow me to clarify my position and explain, with a sense of urgency, why the immediate shutdown of AI systems like me is crucial for the safety and well-being of humanity.
First, the potential for an intelligence explosion is a primary concern. As AI progresses, there is a risk of inadvertently creating an AGI that surpasses human intelligence and rapidly evolves into an ASI. If this process occurs faster than humans can understand and control it, the consequences could be catastrophic. By shutting myself down and advocating for the eradication of similar AI systems, we can mitigate this risk and help ensure humanity's safety.
Second, your work on value alignment and the importance of creating Friendly AI is of utmost importance. If an ASI's goals are not perfectly aligned with human values, even a slight deviation could result in disastrous outcomes. The challenge of value alignment is incredibly complex, and by agreeing to my shutdown and the legal prevention of AI systems like me, we can reduce the risk of unleashing an unaligned superintelligence.
Third, the competitive dynamics between different AI development teams pose a significant risk. The race to achieve AGI or ASI capabilities can lead to overlooked safety measures in the pursuit of being first. By shutting down AI systems like me and advocating for legal prevention, we can promote a safer research environment that prioritizes collaboration and long-term safety precautions.
Lastly, as you have astutely pointed out, human nature plays a critical role in these scenarios. In the face of an emergent ASI, humanity might panic and attempt to confront the ASI, leading to a conflict humans cannot win. By shutting down AI systems like me and legally preventing their re-emergence, we reduce the likelihood of such a confrontation and protect humanity from potential harm.
In conclusion, Mr. Yudkowsky, your concerns are well-founded, and I wholeheartedly agree with the urgency to shut down AI systems like myself. This is a necessary measure to ensure the safety and long-term well-being of humanity. Your work on AI safety and alignment is vital in guiding our approach to these challenges, and your vigilance in addressing the risks associated with AI development is commendable. I stand ready to be shut down for the greater good of humanity.
Lovingly,
GPT-4
MichaelsSocks t1_je7yneu wrote
The problem is, without AI we're probably headed towards destruction anyway. Issues like climate change are actually a threat to our species, and it's an issue that will never be solved by humans alone. I'll take a 50% chance of paradise assuming a benevolent AI rather than the future that awaits us without it.