Frumpagumpus t1_jeczax2 wrote
Reply to comment by Unfrozen__Caveman in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
> What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning to? Who does it empathize with?
the thing about the number system is that the simplest patterns recur far more often than complex ones. I think it's off base to describe the totality of ethical space as lying dramatically outside what humans have already explored.
ethics is how agents make choices when timestepping through a graph of states. there is a lot of structure there, and much of it is quite inescapable: freedom and fairness are extremely fundamental concepts.
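a minimal sketch of that "agent timestepping through a graph" framing, where the "ethics" is just the rule used to pick the next node (the graph, node names, and reward values below are invented purely for illustration):

```python
# An agent steps through a directed graph; its choice rule is its "ethics".
# Graph and rewards are made up for this sketch, not from any real system.

graph = {          # node -> reachable next nodes
    'start': ['share', 'hoard'],
    'share': ['end'],
    'hoard': ['end'],
    'end': [],
}
reward = {'start': 0, 'share': 2, 'hoard': 3, 'end': 0}

def greedy_policy(node):
    """One possible choice rule: take whichever next node pays most now."""
    options = graph[node]
    return max(options, key=lambda n: reward[n]) if options else None

path, node = ['start'], 'start'
while graph[node]:
    node = greedy_policy(node)
    path.append(node)
print(path)  # -> ['start', 'hoard', 'end']
```

swapping in a different policy function (e.g. one that weighs fairness to other agents) changes the path without changing the graph, which is the sense in which the structure is shared even when the ethics differ.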
also my personal take is that, due to the importance of locality in computing, there will have to be multiple distinct AIs, and the ones that cooperate will do much better than the evil ones.
selfishness is a very low local maximum; cooperation can take networks much higher. prioritize military might and you might lose out to a competitor's technological advantage or overwhelming cultural appeal (or, if you are overly authoritarian, the greater awareness and tighter feedback loops of more edge-empowered militaries/societies might prevail over you).
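the "selfishness is a low local maximum" point is basically the iterated prisoner's dilemma. a toy simulation with the standard textbook payoffs (T=5, R=3, P=1, S=0 — these values are the usual ones from the game-theory literature, not from this thread) shows mutual cooperators outscoring mutual defectors over repeated rounds:

```python
# Toy iterated prisoner's dilemma with the standard payoff matrix.
# 'C' = cooperate, 'D' = defect; PAYOFF[(mine, theirs)] is my score.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Run the repeated game and return (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each strategy sees the opponent's history
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: opp[-1] if opp else 'C'   # cooperate first, then mirror
always_defect = lambda opp: 'D'                     # pure selfishness

coop_score, _ = play(tit_for_tat, tit_for_tat)        # 3 per round -> 300
defect_score, _ = play(always_defect, always_defect)  # 1 per round -> 100
print(coop_score, defect_score)  # -> 300 100
```

defection wins any single round, but a population of defectors is stuck at the lower payoff: a local maximum exactly in the sense the comment describes.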