
mootcat t1_izh3vne wrote

It isn't that our demise is particularly desired; it's that it would ultimately be an inconsequential side effect of an AI exponentially scaling toward an objective.

Max Tegmark (I believe) compares it to us worrying about destroying an ant colony while constructing a highway. It isn't even a consideration.

1

crua9 t1_izhiomb wrote

Here is a back and forth:

Person A: Max Tegmark (I believe) compares it to us worrying about destroying an ant colony while constructing a highway. It isn't even a consideration.

Person B: Do you like your fridge, or should we go back to ice boxes? Keep in mind fridges save lives because you can store medical supplies.

Person A: wants fridges over ice boxes

Person B: the ice trade was one of the biggest industries in the world, and what killed it was the fridge.

So pick: kill one of the biggest industries humans have ever known, but in return countless people can get medical supplies, food can reach more places, and so on. Or keep that industry and everyone working in it employed, but then everyone who is alive today thanks to the fridge would be dead.

There are always outcomes to every choice, sometimes good and sometimes bad. But a simple risk assessment shows way more lives saved and way more good coming with AGI. And like the fridge, even if you delay it, it will still arrive at some point.
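To make the kind of trade-off being described concrete, here is a minimal sketch of a naive expected-value comparison, assuming purely hypothetical probabilities and payoffs. None of these numbers come from the thread; only the structure of the calculation is the point.

```python
# Naive expected-value comparison of "build AGI" vs. "delay AGI".
# Every probability and payoff below is a hypothetical placeholder,
# chosen only to illustrate the shape of the calculation.

def expected_value(outcomes):
    """Sum probability * value over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

build_agi = [
    (0.50, 100),    # makes life much better
    (0.30, 0),      # changes little
    (0.15, -50),    # makes things worse
    (0.05, -1000),  # existential catastrophe
]
delay_agi = [
    (1.00, -10),    # status-quo costs: lost benefits, preventable deaths
]

print("build AGI:", expected_value(build_agi))  # 50 + 0 - 7.5 - 50 = -7.5
print("delay AGI:", expected_value(delay_agi))  # -10
```

The conclusion flips entirely depending on the probability and magnitude assigned to the catastrophic outcome, which is exactly the point of contention between the two commenters.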

1

mootcat t1_izhwgxu wrote

Are you not aware of the existential risk that AGI/superintelligence poses?

I'm obviously pro-AI, but it's also the greatest risk to humanity and all life.

1

crua9 t1_izhwoib wrote

Yeah, it is a risk. But this is my viewpoint:

  1. It makes our life better (good)
  2. It doesn't really change anything (whatever)
  3. It makes things worse (well, I guess now is a good time to die)
  4. It kills us all (we all die one day anyway, and it isn't like my life is getting better)
1