
ItIsIThePope t1_jedtc3m wrote

"Whomever gets ASI first wins"

Well, ideally, as soon as it comes out, everybody wins, not just a bunch of dudes with big bucks or some snazzy politician. ASI is likely smart enough not to be a slave to the bidding of a few, and would instead look to serve the rest of humanity.

10

turnip_burrito t1_jedy6rw wrote

Intelligence has nothing to do with morals.

6

code142857 t1_jeerw6a wrote

I don't think morals actually exist, and AI will prove this. It's not that I don't follow morals myself; I do. That's how we humans are built: to follow a general code of ethics. But there is no single morality, and it's computationally impossible for a machine to follow one if it doesn't exist in the first place. What about fundamental reality would build such rules into it? Morality is irrelevant to anything that doesn't engage with reality as a human being does.

3

ilikeover9000turtles OP t1_jee4anp wrote

So my philosophy is that a neural network is a neural network, regardless of whether it's made of carbon or silicon.

Imagine you took a child from birth and raised them to be a sniper and kill other human beings in a war zone.

Imagine they grew up with no understanding of the value of human life.

You've basically raised a sociopath. Imagine all the ways that kid's going to be effed up by the time they're in their 30s.

Right, so what do you think the military is going to be raising an AI for?

Do you think they're going to teach it to value and respect human life?

So my hope would be that our government would see the danger in raising a malicious, sociopathic AI, and that we would instead teach it benevolence, love, and care, but I know that's probably not going to happen.

I hope that whoever builds ASI first instills it with a strong sense of morality, ethics, and compassion for other beings.

My hope would be that this ASI would look at us the way I look at animals. Any animal that's benevolent towards me, I feel love for and want to help as much as is within my power. I love pretty much all animals. Hell, I would even love a bear if it was friendly and benevolent towards me. The only time I wouldn't like an animal is if it was trying to eat me, attack me, or hurt me; if an animal is trying to harm me, my instinct is to kill it as quickly as possible.

As long as an animal approaches me in love, I think that's awesome. I'd love to have all kinds of animal friends. Can you imagine having a friend like a deer, a wild rabbit, or wild birds, like something out of Snow White? I would love to have all the animal friends.

My hope is that the ASI feels the same: as long as we care about it, it cares about us and wants to help us, just like we would help animals.

I just hope we raise this neural network right and instill the correct morals and values in it. We're basically creating a god, and I think it's going to be better if we create a god that is not a sociopath.

2

genericrich t1_jee793c wrote

Hope is not a plan.

1

ilikeover9000turtles OP t1_jee7tuv wrote

There really is no plan.

1

genericrich t1_jeeephy wrote

Yes, this is the problem.

Actually, there is a plan. The US DOD has plans, revised every year, for invading every country on Earth. Why do they do this? Just in case they need to, and it's good practice for low-level general staff.

Do you really think the US DOD doesn't have a plan for what to do if China or Russia develop an ASI?

I'm pretty sure they do, and it involves the US Military taking action against the country that has one if we don't. If they don't have a plan, they are negligent. So odds are they have a plan, even if it is "Nuke the Data Center".

Now, if they have THIS plan for a foreign adversary, do you think they also have a similar plan for what happens if a Silicon Valley startup develops the same kind of ASI we're afraid China and Russia might get, the kind we're ready to nuke or bomb if it comes down to it?

I think they probably do.

It is US doctrine that no adversary capable of challenging our military supremacy be allowed to do so. ASI clearly could challenge it, so it can't be tolerated in anyone's hands but ours.

Going to be very interesting.

2

turnip_burrito t1_jegfptf wrote

But a nuke would not only cause a huge backlash, it would also likely ruin your own economy.

2

[deleted] t1_jee4fv5 wrote

[deleted]

1

genericrich t1_jee7j2r wrote

Killing humanity right away would kill the ASI too. Any ASI is going to need people to keep it turned on for quite a few years. We don't have robots that can swap servers, manage infrastructure, operate power plants, etc.

Yet.

The danger will be that the ASI starts helping us with robotics. Once it has its robot army factory, it could self-sustain.

Of course, it could make a mistake and kill us all inadvertently before then. But it would die too, so if it's superintelligent, hopefully it won't.

2