Surur t1_j6ue5ml wrote

I'm too tired to argue, so I'm letting ChatGPT do the talking.

An AGI (Artificial General Intelligence) may run amok under the following conditions:

  • Lack of alignment with human values: If the AGI has objectives or goals that are not aligned with human values, it may act in ways that are harmful to humans.

  • Unpredictable behavior: If the AGI is programmed to learn from its environment and make decisions on its own, it may behave in unexpected and harmful ways.

  • Lack of control: If there is no effective way for humans to control or intervene in the AGI's decision-making process, it may cause harm even if its objectives are aligned with human values.

  • Unforeseen consequences: Even if an AGI is well-designed, it may have unintended consequences that result in harm.

It is important to note that these are potential risks and may not necessarily occur in all cases. Developing safe and ethical AGI requires careful consideration and ongoing research and development.

1

AsheyDS t1_j6uiarq wrote

You're stating the obvious, so I don't know that there's anything to argue about (and I'm certainly not trying to). Obviously if 'X bad thing' happens or doesn't happen, we'll have a bad day. I have considered alignment and control in my post and stand by it. I think the problem you and others may have is that you're anthropomorphizing AGI when you should be considering it a sophisticated tool. Humanizing a computer doesn't mean it's not a computer anymore.

1

Surur t1_j6ul2uo wrote

The post says you don't have to anthropomorphize AGI for it to be extremely dangerous.

That danger may include trying to take over the world.

2

AsheyDS t1_j6uo5bb wrote

Why would a computer try to take over the world? The only two options are an internally generated desire or an externally given command. The former is extremely unlikely. Could you try articulating your reasoning as to why you think it might do that?

0

Surur t1_j6uqj39 wrote

The most basic reason is that it would be an instrumental goal on the way to achieving its terminal goal.

That terminal goal may have been given to it by humans, leaving the AI to develop its own instrumental goals for achieving it.

For any particular task, taking over the world is one potential instrumental goal.

For example, to make an omelette, taking over the world to secure an egg supply could be one such instrumental goal.

For some terminal goals, taking over the world may be a very logical instrumental goal, e.g. maximising profit, ensuring health for the most people, or getting rid of the competition.

As the skill and power of an AI increase, taking over the world becomes a more likely option, since it gets easier and easier and the cost gets lower and lower.
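Here's a toy sketch of that dynamic, using a naive cost-benefit planner. Every name and number below is hypothetical, picked purely to illustrate the point:

```python
# Toy illustration of instrumental convergence: a naive planner simply picks
# whichever sub-plan best serves the terminal goal, with no notion of
# "reasonable". All plan names and numbers are made up for illustration.

def best_plan(plans, capability):
    """Pick the plan with the highest expected utility at a given capability."""
    def expected_utility(plan):
        effective_cost = plan["cost"] / capability  # more capable -> cheaper plans
        return plan["success_prob"] - 0.01 * effective_cost
    return max(plans, key=expected_utility)

omelette_plans = [
    {"name": "check the fridge, buy eggs if needed", "cost": 1,         "success_prob": 0.95},
    {"name": "take over the world's egg supply",     "cost": 1_000_000, "success_prob": 0.999},
]

for capability in (1, 10_000, 10_000_000):
    choice = best_plan(omelette_plans, capability)
    print(f"capability={capability:>10,}: {choice['name']}")
```

At low capability the mundane plan wins, but once the effective cost of the extreme plan drops far enough, its slightly higher reliability makes it the "rational" choice, which is the point about skill and power above.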

2

AsheyDS t1_j6uzur0 wrote

This is much like the paperclip scenario: it's unrealistic and incomplete. Do you really think a human-level AGI or an ASI would just accept one simple goal and operate independently from there? You think it wouldn't be smart enough to clarify things before proceeding, even if it did operate independently? Do you think it wouldn't consider the consequences of extreme actions? Would it not consider options that work within the system rather than against it? And you act like taking over the world is a practical goal that it would come up with, but is it practical to you? If it wants to make an omelette, the most likely options will come up first, like checking for eggs and, if there aren't any, buying some, because it will understand the world it inhabits and will know to adhere to laws and rules. If it ignores them, then it will ignore goals as well, and just not do anything.

2

Surur t1_j6v0xyu wrote

As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law?

From our experience with AI systems, the shortest route to the result is what an AI optimises for, and if something is physically allowed it will be considered. Even if you think something is unlikely, it only has to happen once for it to be a problem.

Considering that humans have tried to take over the world, and they faced all the same issues around needing to follow rules etc., those rules are obviously not a real barrier.

In conclusion, even if you think something is very unlikely, that does not mean the risk is not real. If something happens once in a million times, it likely happens several times per day on our planet.
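To put rough numbers on that: the one-in-a-million figure is from above, while the number of daily opportunities is an assumption chosen just for illustration:

```python
# Back-of-the-envelope check on the "once in a million times" point.
p_event = 1e-6                 # "happens once in a million times"
trials_per_day = 10_000_000    # assumed independent opportunities per day (illustrative)

expected_per_day = p_event * trials_per_day
p_at_least_one = 1 - (1 - p_event) ** trials_per_day

print(f"expected occurrences per day: {expected_per_day:.0f}")   # ~10
print(f"chance of at least one today: {p_at_least_one:.5f}")     # ~0.99995, effectively certain
```

With millions of independent chances per day, a one-in-a-million failure mode stops being rare at planetary scale.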

1

AsheyDS t1_j6vejfr wrote

>As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law?

That's not what I said or meant. You're taking things to the extremes. It'll be neither a cold, logical, single-minded machine nor a human with human ambitions and desires. It'll be somewhere in between, and neither at the same time. In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.
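As a rough sketch of what I mean by "obey them, not care about them" (everything here, the rule set, action names, and functions, is hypothetical and only shows the layering):

```python
# Sketch of a hard rule layer sitting between the cognitive layer and execution.
# The planner can propose whatever it likes; forbidden actions simply never run.

FORBIDDEN = {"break into a store", "seize the egg supply chain"}  # hypothetical rules

def plan_actions(goal):
    """Stand-in for the cognitive layer: candidate actions, best first."""
    return ["check the fridge for eggs", "buy eggs", "break into a store"]

def execute(goal):
    for action in plan_actions(goal):
        if action in FORBIDDEN:
            continue  # enforced structurally; the system doesn't need to "care"
        return f"executing: {action}"
    return "no permissible action found"

print(execute("make an omelette"))  # -> executing: check the fridge for eggs
```

The point being that rule-following can be a property of the architecture rather than of the system's attitude.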

But I get it, there's always going to be a risk of malfunction. Researchers are aware of this, and many people are working on safety. The risk should be quite minimal, but yes you can always argue there will be risks. I still think that the bigger risk in all of this is people, and their potential for misusing AGI.

1

Surur t1_j6w14rs wrote

> In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.

I believe it is much more likely we will produce a black box which is an AGI, one that we then employ to do specific jobs, rather than turning an AGI into a classic rule-based computer. The AGI we use to control our factory will likely know all about Abraham Lincoln, because it will have picked up that background from learning to use language to communicate with us, along with public holidays and all the other things we take for granted with humans. It will be able to learn and change over time, which is the point of an AGI. There will be an element of unpredictability, just like with humans.

1

AsheyDS t1_j6xdpl6 wrote

>I believe it is much more likely we will produce a black box which is an AGI

Personally, I doubt that... but if current ML techniques do somehow produce AGI, then sure. I just highly doubt they will. I think AGI will be more accessible, predictable, and understandable than current ML processes if it's built in a different way. But of course there are many unknowns, so nobody can say for sure how things will go.

1