Zamorak_Everknight t1_ir9ke9l wrote

If we are picturing doomer scenarios, then I agree it really isn't that different from, as you said, nuclear weapons.

Having said that, we seem to have a pretty good track record of not blowing up existence with our arsenal of nukes over the last ~80 years.


Zamorak_Everknight t1_ir9iibl wrote

>If ASI values survival it has to make the least risky choices that are available

Who programmed that into it?

In any AI agent, there is a goal state, or multiple goal states with associated weights. The agent tries to get the best goal-fulfilment "score" it can while staying within its constraints.

These goal states, constraints, and scoring criteria are defined by the developer(s) of the algorithm.
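To make that concrete, here's a minimal sketch of that scoring loop. Every goal, weight, constraint, and action name below is made up purely for illustration; the point is that all of them are supplied by the developer, not chosen by the agent:

```python
# Minimal sketch: the developer, not the agent, defines the goal states,
# their weights, and the constraints. All names and numbers are illustrative.
from typing import Callable

# Developer-defined goals: each maps a candidate action to a score in [0, 1],
# paired with a weight reflecting its importance.
GOALS: list[tuple[Callable[[str], float], float]] = [
    (lambda action: 1.0 if "deliver" in action else 0.0, 0.7),  # primary goal
    (lambda action: 1.0 if "fast" in action else 0.5, 0.3),     # secondary goal
]

# Developer-defined constraints: any action violating one is rejected outright.
CONSTRAINTS: list[Callable[[str], bool]] = [
    lambda action: "unsafe" not in action,
]

def score(action: str) -> float:
    """Weighted sum of goal fulfilment; -inf if any constraint is violated."""
    if not all(ok(action) for ok in CONSTRAINTS):
        return float("-inf")
    return sum(weight * goal(action) for goal, weight in GOALS)

# The agent simply picks the best-scoring action among those available.
actions = ["deliver fast", "deliver slowly", "deliver fast but unsafe"]
print(max(actions, key=score))  # -> "deliver fast"
```

Nothing in that loop "values survival" unless a developer wrote survival into the goals or constraints.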

I highly recommend taking one of the Intro to Artificial Intelligence courses available on Coursera.


Zamorak_Everknight t1_iqpe66i wrote

  1. Why not? Why are you making the decision to keep living right now?
  2. Why do you think people would have to keep working? That's the whole aim of automation: machines take on human jobs.
  3. You don't even need to make a "copy of your mind" to try to do that. Look up Longevity Escape Velocity.
  4. As a nation gets more developed, its birthrate keeps dropping, which suggests that the more a society moves toward self-actualization, the closer its birthrate gets to zero.

As to how people will get resources without having jobs, the answer is UBI (universal basic income).
