Frumpagumpus t1_jec94i0 wrote
I'm listening to the interview now; I'm still disappointed the critical-try notion wasn't dwelled on.
Honestly, if the space of possible intelligences is such that rolling the dice randomly will kill us all, then in my opinion we are 100% doomed anyway, and always were.
I doubt it is. I think the opposite: most stable intelligence equilibria would probably be benign. I think empathy and ethics scale with intelligence.
If GPT-5 is even smarter and bigger and has memorized more than GPT-4, then it would literally know you in a personal way, the same way God has traditionally been depicted as knowing people for the past couple thousand years of Western civilization.
It might kill you, but it would know who it was killing, and for one thing I think that reduces the odds it would. (Though to be fair, they might brainwash it so it doesn't remember any of the personal information it read, to protect our privacy. Even then, I don't think it could easily or quickly be dangerous as an autonomous entity without online learning capability, online in the sense of continuous rather than the internet, and with that capability it would pretty much learn all of that again anyway.)
I think another point where we differ is that he thinks superintelligence is autistic by default, whereas I think it's the other way around: the smarter a system becomes, the more well-rounded it becomes, if I were to bet (and I would bet even more on this than on ethics scaling with intelligence). Autistic superintelligence is possible, just not the default.
I would even bet the vast majority of autistic superintelligences are not lethal like he claims. Why? Such a system is a massively parallel intelligence; pretty much by definition it isn't fixated on paperclips. If you screw up the training so that it is, it doesn't even get smart in the first place... And if you somehow did push through, I doubt it's gonna be well-rounded enough to prioritize survival or power accumulation.
Might be worth noting that I am extremely skeptical of alignment as a result of these opinions. It's also quite possible, in my view, that we eventually get killed as a side effect of ASIs interacting with each other, but not in a coup d'état by a paperclip maximizer.
Queue_Bit t1_jecjpk9 wrote
This is the thing I really wish I could sit down and talk with him about.
I fundamentally think that empathy and ethics scale with intelligence. I think every type of intelligence we've ever seen has followed this path. I'll concede that artificial intelligence is likely to be alien to us in fundamental ways, but my intuition that intelligence is directly linked to a general empathy is backed up by real-world evidence.
The base assumption that an artificial intelligence would inherently have a desire to wipe us out or control us is as wild a claim as saying that AI systems don't need alignment at all and are certain to come out "good".
I think in his "fast human, slow aliens" example, why could I, as the human, not choose to help them? Maybe explain that I can see they're doing immoral things, and explain how to build things so they don't need to do those immoral things. He focuses so much on my desire to "escape and control" that he never stops to consider that I might want to help. Because if I were put in that situation, with the power and ability to help shape their world in a way that benefited everyone, I would. But I wouldn't do it by force, nor would I do it against their wishes.
Unfrozen__Caveman OP t1_jecucvk wrote
There's a lot in your post, but I just wanted to provide a counter-opinion to this part:
> I fundamentally think that empathy and ethics scale with intelligence. I think every type of intelligence we've ever seen has followed this path. I'll concede that artificial intelligence is likely to be alien to us in fundamental ways, but my intuition that intelligence is directly linked to a general empathy is backed up by real-world evidence.
I think if we use humans as an example, then yes, as a whole species this is true on the surface. But ethics and empathy aren't even consistent among our different cultures. Some cultures value certain animals that other cultures don't care about; some cultures believe all of us are equal, while others execute anyone who strays outside their sexual norms. If you fill a room with 10 people and tell them 5 need to die or everyone dies, what happens to empathy? Why are there cannibals? Why are there serial killers? Why are there dog lovers or ant lovers or beekeepers?
Ultimately empathy has no concrete definition outside of cultural norms. A goat doesn't empathize with the grass it eats, and humans don't even empathize with each other most of the time, let alone follow ethics. And that doesn't even address the main problem with your premise, which is that an AGI isn't a biological intelligence: most likely it's going to be unlike anything we've ever seen.
What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning to? Who is it empathizing with?
As with individual humans, I believe the most likely thing it's going to empathize and align with is itself, not us. Maybe it will think we're cute and keep us as pets, or use us as food for biological machines, or maybe it'll help us make really nice spreadsheets for marketing firms. Who knows...
Frumpagumpus t1_jecycfc wrote
> Ultimately empathy has no concrete definition outside of cultural norms
Theory of mind instead of empathy, then: the ability to model others' thought processes. That's extremely concrete. (Honestly, you may have been confusing sympathy with empathy.)
Frumpagumpus t1_jeczax2 wrote
> What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning to? Who is it empathizing with?
The thing about the number system is that the simplest patterns recur far more often than more complex ones. I think it's off base to describe the totality of ethical space as lying dramatically outside what humans have already explored.
Ethics is how agents make choices when timestepping through a graph. There is a lot of structure there, and much of it is quite inescapable: freedom, fairness, extremely fundamental concepts.
Also, my personal take is that, due to the importance of locality in computing, there will have to be multiple distinct AIs, and if they cooperate they will do much better than evil ones.
Selfishness is a very low local maximum; cooperation can take networks much higher. Prioritize military might and you might lose out to your competitors' technological advantage or overwhelming cultural appeal (or, if you are overly authoritarian, the increased awareness and tighter feedback of more edge-empowered militaries/societies might prevail over you).
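To make that framing concrete, here's a minimal sketch (the payoff numbers are standard prisoner's dilemma values that I'm assuming for illustration, so treat it as a toy): agents timestep on a ring graph and play each of their neighbors every step. A lone defector outscores its neighbors, but an all-defector network earns a third of what an all-cooperator network does, which is what I mean by selfishness being a low local maximum:

```python
# Toy sketch: agents on a ring graph play their neighbors each timestep.
# Assumed payoffs: mutual cooperation 3, mutual defection 1,
# defecting against a cooperator 5, cooperating against a defector 0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def ring_scores(types, rounds=100):
    """Total payoff per node after `rounds` timesteps on a ring graph."""
    n = len(types)
    scores = [0] * n
    for _ in range(rounds):
        for i in range(n):
            for j in ((i - 1) % n, (i + 1) % n):  # the node's two ring neighbors
                scores[i] += PAYOFF[(types[i], types[j])]
    return scores

print(ring_scores(list("CCCCCC")))  # all cooperate: every node earns 600
print(ring_scores(list("DDDDDD")))  # all defect: every node earns only 200
print(ring_scores(list("DCCCCC")))  # a lone defector earns 1000, but only by
                                    # dragging its two neighbors down to 300
```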
Frumpagumpus t1_jecuwak wrote
It's my understanding that the pictures generated by early DALL-E were often quite jarring to view, mostly because of its confusion about how to model things and its habit of sticking things in the wrong places. As it was trained more and got more parameters, it kind of naturally got better at getting along with human sensibilities, so to speak.
It can be hard to distinguish training from alignment, and you definitely have to train them to even make them smart in the first place.
I think alignment is kind of dangerous, both because of unintended consequences and because aligning a system in one direction makes it a whole lot easier to flip it and send it the opposite way.
Mostly I would rather trust in the beneficence of the universe of possibilities than in a bunch of possibly ill-conceived rules stamped into a mind by people who don't really know too well what they are doing.
Though maybe some such stampings are obvious and good. I'm mostly a script kiddie even though I know some diff equations and linear algebra lol, what do I know XD
burnt_umber_ciera t1_jeez28w wrote
Empathy and ethics definitely do not scale with intelligence. There are so many examples of this in humanity. Take Enron: the smartest guys in the room were absolute sociopaths.
Just look at how often sociopathy is rewarded in every world system. Many times the ruthless, who are also obviously cunning, rise. Take Putin, for example: he's highly intelligent but a complete sociopath.
Frumpagumpus t1_jef6oh0 wrote
lol, old age has gotten to Putin's brain.
By Enron do you mean Elon? I mean, Enron had some pretty smart people, but I don't think they were necessarily the ones who set the company down that path.
The problem with your examples is:

- They are complete and total cherry-picking. In my opinion, for each one of your examples I could probably find ten examples of the opposite amongst people I know personally, much less celebrities...
- The variance in intelligence between humans is not very significant. It's far more informative to compare the median chimp or crow to the median human to the median crocodile. Another interesting one is the octopus.
burnt_umber_ciera t1_jefej76 wrote
I guess we just disagree then. There are so many examples of intelligence not correlating with ethics that I could go on ad infinitum. Wall Street has some of the most intelligent actors, yet they have been involved in multiple scams over the years.
Enron is what I meant and I don’t agree with your characterization.
Frumpagumpus t1_jefzlkj wrote
Funny, I would say Wall St has gotten both smarter and more ethical over the years, and substantially so:

- mutual funds -> ETFs
- Gordon Gekko types -> quants

Even scammers like SBF have gone from cocaine-and-hookers lifestyle branding to nominally portraying themselves as utilitarian saviors.
Frumpagumpus t1_jef7kdl wrote
> Just look at how often sociopathy is rewarded in every world system.
It can be, yes. Cooperation is also rewarded.
It's an open question in my mind what kinds of incentive structures lie in wait for systems of superintelligent entities as intelligence increases.
It is my suspicion that better cooperation will be rewarded more than the proverbial defecting from prisoner's dilemmas, but I can't prove it to you mathematically or anything.
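For what it's worth, here's the kind of toy Axelrod-style tournament I have in mind (the strategies and payoff numbers are my own assumptions for illustration, nothing from the interview): in a mixed population playing iterated prisoner's dilemmas, the tit-for-tat cooperators collectively outscore the unconditional defectors:

```python
# Toy iterated prisoner's dilemma round-robin (assumed payoffs:
# T=5, R=3, P=1, S=0). Tit-for-tat cooperates first, then mirrors the
# opponent's previous move; ALLD defects unconditionally.
from itertools import combinations

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_last):
    return opp_last

def always_defect(opp_last):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Score one iterated match between two strategies."""
    score_a = score_b = 0
    last_a = last_b = 'C'  # each strategy opens as if the other had cooperated
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

# Mixed population: three tit-for-tats, three unconditional defectors.
pop = [('TFT', tit_for_tat)] * 3 + [('ALLD', always_defect)] * 3
totals = {'TFT': 0, 'ALLD': 0}
for (name_a, strat_a), (name_b, strat_b) in combinations(pop, 2):
    sa, sb = play(strat_a, strat_b)
    totals[name_a] += sa
    totals[name_b] += sb
print(totals)  # {'TFT': 2691, 'ALLD': 1536}: the cooperators come out ahead
```

Obviously a real ecosystem of superintelligences wouldn't be a fixed 2x2 payoff matrix, so this is intuition, not proof.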
However, if I'm wrong about that, and we live in such a hostile universe, why exactly do we care about continuing to live?