Unfrozen__Caveman OP t1_jecucvk wrote
Reply to comment by Queue_Bit in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
There's a lot in your post, but I just wanted to offer a counterpoint to this part:
> I fundamentally think that empathy and ethics scale with intelligence. I think every type of intelligence we've ever seen has followed this path. I will reconcile that artificial intelligence is likely to be alien to us in fundamental ways, but my intuition that intelligence is directly linked to a general empathy is backed up by real world evidence.
If we use humans as an example then yes, as a whole species this is true on the surface. But ethics and empathy aren't even consistent among our different cultures. Some cultures value certain animals that other cultures don't care about; some cultures believe all of us are equal while others execute anyone who strays outside their sexual norms. If you fill a room with 10 people and tell them 5 need to die or everyone dies, what happens to empathy? Why are there cannibals? Why are there serial killers? Why are there dog lovers or ant lovers or beekeepers?
Ultimately, empathy has no concrete definition outside of cultural norms. A goat doesn't empathize with the grass it eats, and humans don't even empathize with each other most of the time, let alone follow ethics. And that doesn't even address the main problem with your premise, which is that an AGI isn't a biological intelligence; most likely it's going to be unlike anything we've ever seen.
What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning to? Who is it empathizing with?
I believe that, like individual humans, it's most likely going to empathize and align with itself, not us. Maybe it will think we're cute and keep us as pets, or use us as food for biological machines, or maybe it'll help us make really nice spreadsheets for marketing firms. Who knows...
Frumpagumpus t1_jecycfc wrote
> Ultimately empathy has no concrete definition outside of cultural norms
Theory of mind instead of empathy, then: the ability to model others' thought processes. Extremely concrete. (Honestly, you may have been confusing sympathy with empathy.)
Frumpagumpus t1_jeczax2 wrote
> What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning to? Who is it empathizing with?
The thing about the number system is that the simplest patterns recur far more often than more complex ones. I think it's off base to describe the totality of ethical space as lying dramatically outside what humans have already explored.
Ethics is how agents make choices when timestepping through a graph. There's a lot of structure there, and much of it is quite inescapable: freedom, fairness, extremely fundamental concepts. (A toy sketch of what I mean by "choices while timestepping through a graph" is below.)
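Here's a minimal sketch of that framing, nothing formal: the state graph, the rewards, and the `greedy_policy` are all invented for illustration. The "ethics" is just whatever rule the agent uses to pick an edge at each timestep.

```python
# Toy sketch: an agent timestepping through a state graph.
# Everything here (states, rewards, policy) is made up for illustration;
# the agent's "ethics" is just the rule it uses to pick an edge.

# state -> list of (next_state, reward) choices available at that state
GRAPH = {
    "start":    [("share", 1), ("hoard", 3)],
    "share":    [("trusted", 4)],
    "hoard":    [("isolated", 0)],
    "trusted":  [("trusted", 4)],
    "isolated": [("isolated", 0)],
}

def greedy_policy(choices):
    # A "selfish" rule: always grab the highest immediate reward.
    return max(choices, key=lambda c: c[1])

def run(policy, steps=5):
    state, total = "start", 0
    for _ in range(steps):
        state, reward = policy(GRAPH[state])
        total += reward
    return total

# Greedy grabs 3 up front, lands in "isolated", and earns 0 forever;
# taking 1 up front ("share") would have compounded to 4 per step.
print(run(greedy_policy))  # -> 3
```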
Also, my personal take is that due to the importance of locality in computing there will have to be multiple distinct AIs, and the ones that cooperate will do much better than the evil ones.
Selfishness is a very low local maximum; cooperation can take networks much higher. Prioritize military might and you might lose out to your competitors' technological advantage or overwhelming cultural appeal (or, if you're overly authoritarian, the increased awareness and tight feedback of more edge-empowered militaries/societies might prevail over you). The toy prisoner's dilemma sketch below shows the basic dynamic.
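A minimal sketch of that "low local maximum," using the standard iterated prisoner's dilemma payoffs (the strategies here are deliberately toy ones):

```python
# Toy iterated prisoner's dilemma with the standard payoffs (T=5, R=3, P=1, S=0).
# Defecting wins any single round, but over repeated play mutual defection
# settles at the low local maximum while mutual cooperation scores far higher.

PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(a, b, rounds=100):
    hist_a, hist_b = [], []   # each player's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(always_defect, always_defect))  # (100, 100): stuck at the low maximum
print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation compounds
```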