Frumpagumpus

Frumpagumpus t1_jefzlkj wrote

funny, I would say,

wall st has gotten both smarter and more ethical over the years, and substantially so

mutual funds -> etfs

Gordon Gekko types -> quants

even scammers like SBF have gone from cocaine-and-hookers lifestyle branding to nominally portraying themselves as utilitarian saviors

1

Frumpagumpus t1_jef7kdl wrote

> Just look at how often sociopathy is rewarded in every world system.

It can be, yes, but cooperation is also rewarded.

It's an open question in my mind what kind of incentive structures lie in wait for systems of superintelligent entities as intelligence increases.

It is my suspicion that better cooperation will be rewarded more than the proverbial "defecting from prisoner's dilemmas", but I can't prove it to you mathematically or anything.
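
Here's a toy sketch of the intuition, nothing like a proof: a standard iterated prisoner's dilemma in Python, with the usual textbook payoff numbers (which I'm just assuming for illustration). When the game repeats, a cooperative strategy like tit-for-tat racks up far more points than mutual defection does.

```python
# Toy iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Payoffs are the usual textbook values (3/3 mutual cooperation,
# 1/1 mutual defection, 5/0 when a defector exploits a cooperator).

PAYOFF = {  # (my_move, their_move) -> my points
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy whatever the opponent did last."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # moves played by a and b respectively
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation compounds
print(play(always_defect, always_defect))  # (100, 100): mutual defection stagnates
print(play(tit_for_tat, always_defect))    # (99, 104): the defector only wins a little, once
```

The defector gets its one-time exploitation payoff and then stalls, while the cooperative pairing keeps compounding; that's the shape of incentive structure I suspect scales up.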

But if it's the other way around and we really do live in such a hostile universe, why exactly do we care about continuing to live?

2

Frumpagumpus t1_jef6oh0 wrote

lol old age has gotten to putin's brain.

by Enron do you mean Elon? I mean, Enron had some pretty smart people, but I don't think they were the ones who set the company down that path necessarily.

the problem with your examples is

  1. they are complete and total cherry-picking; in my opinion, for each one of your examples I could probably find 10 examples of the opposite among people I know personally, let alone celebrities...

  2. the variance in intelligence between humans is not very significant. It's far more informative to compare the median chimp or crow to the median human to the median crocodile. Another interesting one is the octopus.

2

Frumpagumpus t1_jeczax2 wrote

> What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning to? Who is it empathizing with?

the thing about the number system is that the simplest patterns recur far more often than more complex ones. I think it's off base to describe the totality of ethical space as dramatically outside what humans have already explored.

ethics is how agents make choices when timestepping through a graph. there is a lot of structure there, and much of it is quite inescapable: freedom, fairness, extremely fundamental concepts.

also my personal take is that, due to the importance of locality in computing, there will have to be multiple distinct AIs, and if they cooperate they will do much better than evil ones.

selfishness is a very low local maximum; cooperation can take networks much higher. prioritize military might and you might lose out to your competitors' technological advantage or overwhelming cultural appeal (or, if you are overly authoritarian, the increased awareness and tight feedback of more edge-empowered militaries/societies might prevail over you)
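
to make the local-maximum point concrete, here's a toy public-goods game; every number in it (group size, endowment, multiplier) is an arbitrary choice of mine, just meant to show the shape of the payoffs:

```python
# Toy public-goods game: "selfishness is a low local maximum".
# All parameters are invented for illustration.

N = 10            # agents in the network
ENDOWMENT = 10    # what each agent starts with per round
MULTIPLIER = 3    # pooled contributions get tripled, then split evenly

def payoffs(contributions):
    """Each agent keeps whatever it didn't contribute, plus an equal
    share of the multiplied common pool."""
    pool = sum(contributions) * MULTIPLIER
    share = pool / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

everyone_hoards = [0] * N
everyone_shares = [ENDOWMENT] * N
lone_cooperator = [ENDOWMENT] + [0] * (N - 1)

print(payoffs(everyone_hoards)[0])   # 10.0 -> the selfish equilibrium
print(payoffs(everyone_shares)[0])   # 30.0 -> the cooperative peak, 3x higher
print(payoffs(lone_cooperator)[0])   # 3.0  -> a lone cooperator gets punished,
                                     #         which is why hoarding is a (low) local max
```

no single agent can climb out of the all-hoarding state on its own, but the whole network is 3x better off at the cooperative peak; that's roughly what I mean by selfishness being a low local maximum.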

1

Frumpagumpus t1_jecynxk wrote

he realizes AI can think incredibly fast but apparently hasn't thought about how software forks all the time and shuts processes down willy-nilly (he thinks death is silly and stupid, but software does it all the time)

or other mundane details like what it would mean to mentally copy-paste parts of your brain or thoughts, or what mutexes or encryption would mean for a mind
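
for concreteness, here's the kind of everyday thing I mean (ordinary Python multiprocessing, nothing exotic): spawn a copy of a running loop, let it think for a bit, then kill it, and nobody calls it a tragedy.

```python
# Software "copies itself" and "dies" constantly; this is the mundane version.
import multiprocessing as mp
import time

def think(name):
    # a stand-in for some ongoing train of thought
    while True:
        print(f"{name} is thinking...")
        time.sleep(0.5)

if __name__ == "__main__":
    copy = mp.Process(target=think, args=("copy-1",))  # "fork" a second copy
    copy.start()
    time.sleep(2)      # the copy runs for a while...
    copy.terminate()   # ...and then gets shut down willy-nilly
    copy.join()
```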

4

Frumpagumpus t1_jecuwak wrote

it is my understanding the pictures generated by early DALL-E were oftentimes quite jarring to view, mostly because of its confusion about how to model things and its habit of sticking things in the wrong places. as it was trained more and got more parameters, it kind of naturally got better at getting along with human sensibilities, so to speak

it can be hard to distinguish training from alignment, and you definitely have to train to even make them smart in the first place

i think alignment is kind of dangerous because of unintended consequences, and because if you try to align it in one direction it becomes a whole lot easier to flip it and go the opposite way.

mostly I would rather trust in the beneficence of the universe of possibilities than a bunch of possibly ill-conceived rules stamped into a mind by people who don't really know too well what they are doing.

Though maybe some such stampings are obvious and good. I'm mostly a script kiddie even though I know some diff equations and linear algebra lol, what do I know XD

2

Frumpagumpus t1_jec94i0 wrote

i'm listening to the interview now; I am still disappointed the critical try notion was not dwelled on.

honestly if the space of possible intelligences is such that rolling the dice randomly will kill us all, then we are 100% doomed anyway, in my opinion, and always were

I doubt it is, I think the opposite: most stable intelligence equilibria would probably be benign. I think empathy and ethics scale with intelligence.

If gpt5 is even smarter and bigger and has more memorized than gpt4, then it would literally know you in a personal way, in the same way god has traditionally been depicted as doing for the past couple thousand years of western civilization.

It might kill you, but it would know who it was killing, so for one thing I think that reduces the odds it would. (Though to be fair they might brainwash it so it doesn't remember any of the personal information it read, to protect our privacy. But even then I don't think it could easily or quickly be dangerous as an autonomous entity without online learning capability (online not in the sense of the internet but in the sense of continuous), which would mean it would pretty much learn all that again anyway.)

I think another point where we differ is that he thinks superintelligence is autistic by default, whereas I think it's the other way around: though autistic superintelligence is possible, the smarter a system becomes, the more well-rounded it gets, if I were to bet (I would bet even more on this than on ethics scaling with intelligence)

I would even bet the vast majority of autistic superintelligences are not lethal like he claims. Why? It's a massively parallel intelligence. Pretty much by definition it isn't fixated on paper clips. If you screw the training up so that it is, then it doesn't even get smart in the first place... And if you somehow did push through, I doubt it's gonna be well-rounded enough to prioritize survival or power accumulation.

might be worth noting I am extremely skeptical of alignment as a result of these opinions, and also it's quite possible in my view we do get killed as a side effect of ASIs interacting with each other eventually, but not in a coup d'état by a paper clip maximizer

6

Frumpagumpus t1_je726mc wrote

I think it will be used to create massive amounts of micro gig work by intimately knowing everyone in a country and matching supply with demand, amazon/uber style.

basically I think you will be able to just ask AI for anything and it will offer you a price and contract out that work to whoever is nearby.

2

Frumpagumpus t1_jdo24p3 wrote

i don't think he mentioned quantum superposition (ctrl-F), though I am sure his quantum (entanglement?) fascination has some incorrect assumptions embedded in it, just because it would be very hard to have correct assumptions without those assumptions being the precise mathematical formulation of the theory

8

Frumpagumpus t1_jdf25ka wrote

if we get to this point hopefully i'll be dead (from slicing my brain up to scan it into the computer) and (one of?) my software copy(ies) will be floating in a solar array near the sun, working on some infinite-dimensional geometry problem (while simultaneously exploring a virtual multiverse with several permutations of "himself")

2

Frumpagumpus t1_jcttyih wrote

the process of building a dyson swarm only works because it's also a recursive feedback loop.

even if you are only interested in existential risk mitigation or not burning earth up with ever-increasing computational waste heat, the time cost difference between a recursive process like dyson swarm planet disassembly and a non-recursive process like mars terraforming is so large that the recursive process is the clear choice

more energy = more compute = better reasoning (including ethical reasoning), ability to seed the entire universe with von Neumann probes, better simulations and modeling, etc.
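
a back-of-the-envelope sketch of why the recursive route wins, with every number invented purely to show the shape of the curves (not an engineering estimate):

```python
# Self-replicating (recursive) build-out vs. a fixed-rate (non-recursive) effort.
# All numbers are made up for illustration.

TARGET = 1_000_000  # arbitrary "units of swarm" to build

def recursive_cycles(start=1.0, growth=2.0):
    """Each cycle, existing capacity builds an equal amount of new capacity."""
    capacity, cycles = start, 0
    while capacity < TARGET:
        capacity *= growth
        cycles += 1
    return cycles

def linear_cycles(rate=100):
    """A fixed external effort adds the same amount every cycle."""
    built, cycles = 0, 0
    while built < TARGET:
        built += rate
        cycles += 1
    return cycles

print(recursive_cycles())  # 20 cycles, since 2^20 > 1,000,000
print(linear_cycles())     # 10,000 cycles at 100 units per cycle
```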

3

Frumpagumpus t1_jct8gx1 wrote

hi malthus, allow me to once again repost this:

http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf

paraphrase: "The easiest design (for a dyson swarm) would use mercury as the source of material, and construct the swarm at approximately the same distance from the sun"

to further elaborate on the paper, one could imagine that with the solar mirrors one could liquefy a small but growing section of the night side of mercury and thereafter perhaps magnetically (seems reasonable given the planet's very high iron content) accelerate it into space (also with energy collected from redirected sunlight), where it would cool via blackbody radiation and thereafter be relatively easy to refashion. Also, that would look super cool.
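
some rough numbers on the launch-energy side: Mercury's escape velocity and the solar flux at its orbit are real figures, but the mirror area and efficiency below are guesses I made up just to get a feel for the scale.

```python
# Rough scale of magnetically launching material off Mercury with mirror power.
# Physical constants are real; mirror area and efficiency are my own guesses.

MERCURY_ESCAPE_V = 4_250        # m/s, Mercury's escape velocity
SOLAR_FLUX_AT_MERCURY = 9_100   # W/m^2, roughly the mean at Mercury's orbit

mirror_area_m2 = 1_000_000      # assume a 1 km^2 collector (guess)
efficiency = 0.5                # assume half the light becomes launch energy (guess)

energy_per_kg = 0.5 * MERCURY_ESCAPE_V ** 2                             # ~9 MJ/kg
power_available = SOLAR_FLUX_AT_MERCURY * mirror_area_m2 * efficiency   # ~4.6 GW

print(f"{energy_per_kg / 1e6:.1f} MJ per kg to reach escape velocity")
print(f"{power_available / energy_per_kg:.0f} kg/s launched with a 1 km^2 mirror")
```

so even a single square kilometer of mirror is, in principle, enough power to throw hundreds of kilograms per second off the planet, which is why the recursive build-up gets going so fast.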

3