
AsheyDS t1_j6vejfr wrote

>As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law?

That's not what I said or meant. You're taking things to extremes. It'll be neither a cold, logical, single-minded machine nor a human with human ambitions and desires. It'll be somewhere in between, and neither at the same time. In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.
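To make that concrete, here's a rough toy sketch of what I mean (the names and rules here are invented, obviously): the rule check sits outside whatever cognitive machinery proposes actions, so the system obeys the rules without having to "care" about them.

```python
# Toy illustration only: a hard rule layer filtering an agent's actions.
# The "cognitive" part proposes; the rule layer disposes.

FORBIDDEN = {"disable_safety_interlock", "exceed_speed_limit"}  # invented rules

def propose_actions(observation):
    # Stand-in for the cognitive functions that rank candidate actions.
    return ["exceed_speed_limit", "adjust_throttle", "log_status"]

def act(observation):
    for action in propose_actions(observation):
        if action not in FORBIDDEN:  # the check is unconditional, not a preference
            return action
    return "no_op"  # safe default if every candidate is blocked

print(act(observation=None))  # -> "adjust_throttle"
```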

But I get it, there's always going to be a risk of malfunction. Researchers are aware of this, and many people are working on safety. The risk should be quite minimal, but yes, you can always argue there will be risks. I still think the bigger risk in all of this is people and their potential for misusing AGI.

1

Surur t1_j6w14rs wrote

> In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.

I believe it is much more likely we will produce a black box which is an AGI, that we then employ to do specific jobs, rather than being able to turn an AGI into a classic rule-based computer. The AGI we use to control our factory will likely know all about Abraham Lincoln, because it will have picked up that background while learning to use language to communicate with us, along with knowledge of public holidays and all the other things we take for granted with humans. It will be able to learn and change over time, which is the point of an AGI. There will be an element of unpredictability, just as with humans.
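In other words, something like this toy sketch (all names invented): we get to choose the job and the interface we hand the black box, but not to look inside it.

```python
# Toy illustration: an opaque general model wrapped in a narrow job interface.
from typing import Callable

def make_factory_controller(agi: Callable[[str], str]) -> Callable[[dict], str]:
    """Employ a general black-box model for one specific job."""
    def control(sensor_readings: dict) -> str:
        prompt = f"Given sensor readings {sensor_readings}, set the line speed."
        # We constrain the interface, but the reasoning inside `agi` stays
        # opaque, and its answers can drift as it keeps learning.
        return agi(prompt)
    return control

# A trivial stub stands in for the black box here.
controller = make_factory_controller(agi=lambda prompt: "line_speed=0.8")
print(controller({"temp_c": 41, "belt_rpm": 120}))  # -> line_speed=0.8
```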

1

AsheyDS t1_j6xdpl6 wrote

>I believe it is much more likely we will produce a black box which is an AGI

Personally, I doubt that... but if current ML techniques do somehow produce AGI, then sure. I just highly doubt they will. I think AGI will be more accessible, predictable, and understandable than current ML processes if it's built in a different way. But of course there are many unknowns, so nobody can say for sure how things will go.

1