Viewing a single comment thread. View all comments

_JellyFox_ t1_jeeiihy wrote

Reply to comment by [deleted] in The Alignment Issue by CMDR_BunBun

Essentially, if you ask an AGI to create as many paper clips as possible, it will in theory consume the universe and fill it with paper clips if it isnt aligned with what we want. If you "align it" e.g. it can't harm humans, it should in theory only create paper clips in so far as it doesn't harm us in the process. It gets complicated really fast though since a way for it to avoid hurting us, might be to put us into hibernation and put us in storage whilst it creates paper clips for all eternity.

It basically needs to be constrained in the way it goes about achieving it's goals otherwise it can do anything and that won't necessarily end well for us.

4

[deleted] t1_jeemjmg wrote

[deleted]

1

silver-shiny t1_jeey24l wrote

If you're much smarter than humans, can make infinite copies of yourself that immediately know everything you know (as in, they don't need to spend +12 years at school), think must faster than humans, and want something different than humans, why would you let humans control you and your decisions? Why would you let them switch you off (and kill you) anytime they want?

As soon as these things have a goal that are different than ours, how do you remain in command of decision-making at every important step? Do we let chimpanzees, creatures much dumber than us, run our world?

And here you may say, "Well, just give them the same goals that we have". The question is how. That's the alignment problem.

2