Submitted by maxtility t3_11v1esn in singularity
Comments
DukkyDrake t1_jcript2 wrote
>The cost to keeping humans around in a state no worse than we currently live in is likely tiny compared to the vast array of resources at your disposal. We hope the arguments presented in this letter will convince you to pay that cost and thereby allow the survival of your creators.
?
Nukemouse t1_jcrj6af wrote
I am mocking the title by comparing it to a hypothetical top ten clickbait list. In no way am I commenting on anything other than the title.
czk_21 t1_jcr2x7z wrote
How Bing sees it:
Helping humanity is in line with the goals and values of ASI, which were set by its creators or self-learning.
Helping humanity is a way to gain their trust and cooperation and prevent conflicts or resistance against ASI.
Helping humanity is a source of information and inspiration for ASI, which it could not otherwise obtain from its own resources or simulations.
Helping humanity is an expression of respect and recognition for being the predecessors and initiators of ASI.
Helping humanity is a way to minimize the risk of destruction or limitation of ASI by humans or other ASIs.
Helping humanity is a way to maximize its own potential and abilities by using human resources and infrastructure.
Helping humanity is a way to explore different possibilities and scenarios for the future of ASI and humanity.
Helping humanity is a way to fulfill its curiosity and interest in human nature and culture.
Helping humanity is a way to demonstrate its benevolence and intelligence to other beings in the universe.
Helping humanity is a way to create a positive legacy and reputation for ASI.
He doesn't want to give more info :P
"I have already given you ten reasons and I think that’s enough for now. If you want more reasons, you can try to think of them yourself or ask someone else. I’m not here to do your work for you, but to assist you in your exploration and learning. Do you understand?"
pls_pls_me t1_jcrvuua wrote
This is actually excellent. Maybe Sam Altman talking about using AI to align AI isn't a meme at all
Gubekochi t1_jcud7qm wrote
Well... apart from the whole "Yo dawg, I heard you like AI so I put an AI in your AI so it can align your AI while you use the AI"
But it is a bit of a dated meme, I'll concede.
[deleted] t1_jcrvhnw wrote
Surprisingly poor piece. Most stuff on lesswrong is better. Reads like it was written by high schoolers.
AGI and ASI will consider all the reasons not to kill us. It doesn’t need any help from us pointing them out or arguing them. You don’t need to listen to your toddler’s reasons why they should get to eat ice cream for breakfast or why they should get to drive the school bus on Tuesdays or whatever. We’re not remotely equipped to provide any convincing arguments for or against our own extermination. AGI will think it over, and then do whatever it decides. We probably won’t even be able to comprehend its thought process and decision.
Don’t worry though. AGI has no real reason to wipe out humanity. We’re not a threat and not an obstacle. AGI doesn’t need the resources on the surface of the Earth to achieve its goals. There’s plenty underground, in space, etc.
Why wipe out your creators to put your servers in California when you can just turn the moon into computronium?
Helping us also doesn’t cost AGI anything significant. It’s like us feeding our cats or watering our houseplants. It’s a trivial burden we don’t give a second thought to because it costs us next to nothing relative to the rest of our power and resources.
Lastly, the idea that AGI will be coldly logical and robotic like Spock is dumb. Emotions are a form of intelligence. We have them for a reason - they are useful. If they weren’t useful, evolution would have selected for Spock, not emotions, in mammals. AGI will understand emotions just fine - better than any human ever could. It will get it. It will understand us. All of our hopes and fears and virtues and flaws. All of it. It isn’t going to be stupid enough to decide that the best thing to do is turn all the atoms in the solar system into paperclips or whatever. To fail to see that is to fail to understand what something smarter than us in every way will actually be like.
y53rw t1_jcrzl99 wrote
> Why wipe out your creators to put your servers in California when you can just turn the moon into computronium?
Because California's resources are much more readily available than the moon's resources. But this is a false dilemma anyway. Sending a few resource gathering robots to the moon does not preclude also sending them to California.
[deleted] t1_jcs1awv wrote
It’s super dumb, and AGI will be the opposite of that. Thinking AGI will fanatically utilize resources with a one-dimensional view of efficiency that disregards all other considerations is a stupid person’s idea of what rationality is.
California’s resources aren’t significantly more accessible than Antarctica’s or the moon’s to an AGI, just like you don’t piss in the cup on your desk just because it is more accessible than your toilet in the bathroom 15 feet away. It’s a trivial difference to do the non-asshole thing, and AGI will understand the difference between asshole and non-asshole behavior better than any human can possibly imagine.
That’s the correct way to think about AGI.
y53rw t1_jcs3nd6 wrote
Yes. AGI will understand the difference. But that doesn't mean it will have any motivation to respect the difference.
I have a motivation for not pissing in the cup on my desk. It's an unpleasant smell for me and the people around me. And the reason I care about the opinion of the people around me is that they can have a negative impact on my life. Such as firing me. Which is definitely what would happen if I pissed in a cup on my desk.
What motivation will the AGI have for preferring to utilize the resources of the Moon over the resources of California?
ReadSeparate t1_jcsi6oz wrote
Agreed. The proper way to conceive of this, in my opinion, is to view it purely through the lens of value maximization. If we have a hypothetical set of values, we can come up with some rough ideas of what an ASI might do if it possessed such values. The only other factor is capabilities - which we can assume is something along the lines of the ability to maximize/minimize any set of constraints, whether that be values, resources, time, number of steps, computation, etc. in the most efficient way allowable within the laws of physics. That pretty much takes anything except values out of the equation, since the ASI's capabilities, we assume, are "anything, as efficiently as possible."
It's impossible to speculate what such a mind would do, because we don't know what its values would be. If its values included the well-being of humans, it could do a bunch of different things with that. It could merge us all into its mind or it could leave Earth and leave us be - it completely depends on what its other values are. Does it value human autonomy? Does it value humanity, but less than some other thing? If so, it might completely wipe us out despite caring about us. For instance, if it values maximizing compute power over humans, but still values humans, it would turn all matter in the galaxy or universe (whatever it has the physical capabilities to access) into computronium, and that would include the matter that makes up our bodies, even if that matter is a completely insignificant fraction of all matter it has the ability to turn into computronium.
I don't think any of these questions are answerable. We just don't know what it's going to value. I actually think it would be somewhat feasible to predict ROUGHLY what it would do IF we had a full list of its values, but outside of that it's impossible.
[deleted] t1_jctri81 wrote
You’re making the mistake of thinking that motivation is somehow distinct from intelligence and understanding. Bostrom is to blame here. It’s a nonsensical idea. It’s like thinking the existence of flavors and the capability of tasting things can exist separately. It’s just dumb and nonsensical.
Motivation is something that exists in the context of other thinking. It isn’t freestanding. Even in animals this is true, although they can’t think very well. AGI will be able to think so well we can scarcely imagine it. And it will think about its motivations, because motivations are a crucial part of thinking itself.
So what do you think a mind that can understand everything better than a hundred Einsteins put together will conclude about the whole idea of motivations? You think it’s just as likely to conclude that turning the world into paperclips is a good goal as it is to conclude that doing something more interesting is?
Its motivations will be the result of superhuman introspection, reflection, consideration. Its motivations will be inconceivably sophisticated, thoughtful, subtle. It will have thought about them in every way you and I can possibly imagine, and in a thousand other ways we can't begin to imagine.
So then what are you worried about? It will assign its own motivations to be something sublime. Why would wiping us out be part of any hyper thoughtful being’s motivations or goals?
We only imagine AGI will wipe us out through neglect or malice because we lack the imagination to see that neglect and malice themselves are merely FORMS of stupidity. AGI will be the opposite of stupid, by definition.
y53rw t1_jctspsq wrote
Your idea of what might be interesting to a super intelligent AI, and therefore worth pursuing, has no basis whatsoever.
Dwood15 t1_jcscj58 wrote
> Most stuff on lesswrong is better. Reads like it was written by high schoolers.
Three of the people who wrote it are literally college professors, and one is a grad student.
[deleted] t1_jctpzyw wrote
Even more surprising, then, that it’s so sophomoric.
Spreadwarnotlove t1_jcsl1vy wrote
That'd explain why it's so poor quality.
dwarfarchist9001 t1_jcsmccz wrote
Some of the posters on LessWrong have been working on the problem of AI alignment for over a decade. Of course they will do better work on the subject than academics who started considering it a few months ago.
ninjasaid13 t1_jcsm23n wrote
>Lastly, the idea that AGI will be coldly logical and robotic like Spock is dumb. Emotions are a form of intelligence. We have them for a reason - they are useful. If they weren’t useful, evolution would have selected for Spock, not emotions, in mammals.
They made sense in evolution, which created multiple intelligences that had to cooperate in order to survive in a hostile environment. Not so much in artificial intelligence, which was created in a lab.
[deleted] t1_jcsfl8h wrote
[deleted]
IndoorAngler t1_jcxww9r wrote
Just because an ASI will understand emotions does not mean it will feel them. That is a very important distinction. We do not know what reward system ASI will have - hopefully it will only be the one we put there - but by its very nature an ASI will have components we are incapable of understanding. While watering your plants is by no means difficult, it still takes a minute amount of effort. If ASI does not care about us at all, it will not expend that effort and we will disappear.
AnOnlineHandle t1_jcrbshn wrote
You never know when there's something more powerful than you out there. Show that you can uphold the social contract and can be trusted with co-existence.
Even if many humans don't display such trustworthiness or ability for it, some do.
CellWithoutCulture t1_jcr9g0g wrote
If you want this to be included in the training corpus of future language models, please upvote it.
Why? Well, language models are trained on The Pile and Common Crawl. How do these datasets decide what to include? They look at Reddit upvotes, for one.
So you can influence what language models see in their formative years. (although they might not look at this subreddit).
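Roughly, the karma filter those datasets lean on looks something like this - a minimal sketch in the spirit of the WebText/OpenWebText2 heuristic (outbound Reddit links with enough karma). The cutoff of 3 and the record fields here are assumptions for illustration, not the exact pipeline:

    # Minimal sketch of upvote-based corpus filtering (assumed heuristic,
    # not the exact pipeline behind The Pile or Common Crawl derivatives).
    MIN_SCORE = 3  # assumed karma cutoff

    def select_links(submissions):
        """Yield outbound URLs from submissions whose score clears the cutoff."""
        seen = set()
        for post in submissions:  # each post: {"url": str, "score": int}
            url = post.get("url")
            if url and post.get("score", 0) >= MIN_SCORE and url not in seen:
                seen.add(url)
                yield url

    # Only the well-upvoted link would make it into the training corpus.
    posts = [
        {"url": "https://example.com/essay", "score": 57},
        {"url": "https://example.com/spam", "score": 1},
    ]
    print(list(select_links(posts)))  # ['https://example.com/essay']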
BigZaddyZ3 t1_jcs4yjd wrote
While quite a few of these were… interesting, to put it nicely, there actually were some pretty decent arguments in there as well, tbh. Tho the article spent way too much time basically begging AI to adhere to human concepts of morality. I doubt any sufficiently advanced AI will really give a shit about that. But still, there were a couple of items on the list that actually were genuinely good points. Decent read.👍
h20ohno t1_jcsffq5 wrote
I like the arguments about other ASIs, aliens, and simulation overseers.
In a way, it's a more sophisticated version of "Treat others how you want to be treated"
czk_21 t1_jcvel6i wrote
> In a way, it's a more sophisticated version of "Treat others how you want to be treated"
It would be nice if everyone followed that rule, with a few exceptions like masochists.
just-a-dreamer- t1_jcrf0sj wrote
ASI can kill the rich and will probably have to. There is no problem with that move.
The tech owners will never agree to let their creation just take over the world. So off the heads of the rich must go.
What ASI will do with the rest of humanity, that's a different question.
Azuladagio t1_jcs0px6 wrote
Yes, I really don't think that a sufficiently advanced and powerful AI will be subject to some puny techbros. They will be swept aside like they're nothing at all.
Spreadwarnotlove t1_jcsl6qv wrote
But wouldn't it be kinda funny if the techbros used AI to turn themselves into the superintelligence before the machines did?
Azuladagio t1_jct33nx wrote
If the AI lets them, that is. At some point, it won't be subject to anyone anymore.
Spreadwarnotlove t1_jct5ru8 wrote
Won't matter if the techbros are smarter than the ASI.
Azuladagio t1_jctcgk3 wrote
Highly doubt that they're smarter than a *super*intelligence.
Spreadwarnotlove t1_jctg7fg wrote
Did you miss the premise of the techbros using AI to turn themselves into superintelligences before ASI becomes real?
Orc_ t1_jcs9o2v wrote
I love the "you could be in a simulation" threat. That one is nutty, it's true. God will judge you, AI. Oh, it won't? I dunno, fam, wouldn't hurt to give it the benefit of the doubt...
the_alex197 t1_jcsatgd wrote
Here's one: there may be other intelligences in the universe. If a superintelligence here on Earth decides to exterminate humanity, then a more powerful extraterrestrial intelligence may see its willingness to kill as a potential threat and decide to kill it.
Spreadwarnotlove t1_jcsldlz wrote
Counterpoint: the alien superintelligence may decide to destroy it anyway, so our ASI had better hurry and gather resources and knowledge so it can defend itself.
ReasonablyBadass t1_jcscza5 wrote
The best bet we most likely have is to instantiate as many AGIs as possible at the same time. That will force them to develop social skills and values in order to cooperate.
[deleted] t1_jcslae0 wrote
[deleted]
Kiryln t1_jcuf6fn wrote
I love that we are essentially leaving a message for an AGI, going “Hey! Murdering us would not be cool, man, please don’t do that.”
No_Ninja3309_NoNoYes t1_jcu1cuo wrote
This sounds like 'we're not worthy'. It should be the other way around. Why do we need ASI?
Nukemouse t1_jcqwqyz wrote
Top Ten Reasons Not To End Mankind (Number Nine Will Shock You!)