Frumpagumpus t1_jefzlkj wrote
Reply to comment by burnt_umber_ciera in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Funny, I would say Wall St has gotten both smarter and more ethical over the years, and substantially so:
mutual funds -> ETFs
Gordon Gekko types -> quants
Even scammers like SBF have gone from cocaine-and-hookers lifestyle branding to nominally portraying themselves as utilitarian saviors.
Frumpagumpus t1_jef7kdl wrote
Reply to comment by burnt_umber_ciera in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
> Just look at how often sociopathy is rewarded in every world system.
It can be, yes; cooperation is also rewarded.
It's an open question in my mind what kinds of incentive structures lie in wait for systems of superintelligent entities as intelligence increases.
My suspicion is that better cooperation will be rewarded more than the proverbial defecting from prisoner's dilemmas, but I can't prove it to you mathematically or anything (see the sketch below).
However, if sociopathy really is what wins and we live in such a hostile universe, why exactly do we care about continuing to live?
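A minimal sketch of what I mean, not a proof: an iterated prisoner's dilemma with the textbook payoffs and strategies (none of this is from the thread, just an illustration). Under repeated play, two cooperators outscore a defector exploiting a cooperator.

```python
# Toy iterated prisoner's dilemma with standard payoffs (T=5, R=3, P=1, S=0).

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated prisoner's dilemma; return total payoffs for both players."""
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees only the opponent's past
        move_b = strategy_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
        pa, pb = payoff[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
    return score_a, score_b

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation compounds
print(play(always_defect, tit_for_tat))  # (204, 199): defection wins once, then stagnates
```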
Frumpagumpus t1_jef6oh0 wrote
Reply to comment by burnt_umber_ciera in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Lol, old age has gotten to Putin's brain.
By Enron do you mean Elon? Enron had some pretty smart people, but I don't think they were necessarily the ones who set the company down that path.
The problem with your examples is:
- They are complete and total cherry-picking; in my opinion, for each one of your examples I could probably find 10 examples of the opposite among people I know personally, much less celebrities...
- The variance in intelligence between humans is not very significant. It's far more informative to compare the median chimp or crow to the median human to the median crocodile. Another interesting one is the octopus.
Frumpagumpus t1_jeedblr wrote
They removed this? That's a tragedy.
Frumpagumpus t1_jeczax2 wrote
Reply to comment by Unfrozen__Caveman in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
> What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning to? Who is it empathizing with?
The thing about the number system is that the simplest patterns recur far more often than more complex ones. I think it's off base to describe the totality of ethical space as lying dramatically outside what humans have explored.
Ethics is how agents make choices when timestepping through a graph. There is a lot of structure there, and much of it is quite inescapable: freedom and fairness are extremely fundamental concepts.
Also, my personal take is that, due to the importance of locality in computing, there will have to be multiple distinct AIs, and if they cooperate they will do much better than evil ones.
Selfishness is a very low local maximum; cooperation can take networks much higher. Prioritize military might and you might lose out to your competitors' technological advantage or overwhelming cultural appeal (or, if you are overly authoritarian, the increased awareness and tight feedback of more edge-empowered militaries/societies might prevail over you).
Frumpagumpus t1_jecynxk wrote
Reply to comment by Yangerousideas in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
He realizes AI can think fast, but apparently hasn't thought about how software forks all the time and shuts processes down willy-nilly (he thinks death is silly and stupid, but software does it all the time),
or about other mundane details, like what it would mean to mentally copy-paste parts of your brain or thoughts, or mutexes, or encryption.
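For example, a toy Python sketch of the fork-and-kill pattern ordinary software does constantly (standard library only; the worker loop is just a hypothetical stand-in for ongoing "thought"):

```python
import multiprocessing
import time

def worker():
    """Stand-in for an ongoing computation or 'thought'."""
    while True:
        time.sleep(0.1)

if __name__ == "__main__":
    # "Fork" the program's logic into a new process...
    p = multiprocessing.Process(target=worker)
    p.start()
    time.sleep(0.5)
    # ...then shut it down willy-nilly, as software does all the time.
    p.terminate()
    p.join()
    print(p.exitcode)  # negative exit code: the process was killed by a signal
```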
Frumpagumpus t1_jecycfc wrote
Reply to comment by Unfrozen__Caveman in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
> Ultimately empathy has no concrete definition outside of cultural norms
Theory of mind instead of empathy, then: the ability to model others' thought processes. Extremely concrete (honestly, you may have been confusing sympathy with empathy).
Frumpagumpus t1_jecuwak wrote
Reply to comment by Queue_Bit in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
It is my understanding that the pictures generated by early DALL-E were often quite jarring to view, mostly because of its confusion about how to model things and its habit of sticking things in the wrong places. As it was trained more and got more parameters, it naturally got better at getting along with human sensibilities, so to speak.
It can be hard to distinguish training from alignment, and you definitely have to train these models to even make them smart in the first place.
I think alignment is kind of dangerous, both because of unintended consequences and because if you align a model in one direction, it becomes a whole lot easier to flip it and go the opposite way.
Mostly I would rather trust in the beneficence of the universe of possibilities than a bunch of possibly ill-conceived rules stamped into a mind by people who don't really know too well what they are doing.
Though maybe some such stampings are obvious and good. I'm mostly a script kiddie, even though I know some differential equations and linear algebra, lol. What do I know XD
Frumpagumpus t1_jec94i0 wrote
Reply to AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
I'm listening to the interview now; I am still disappointed the "critical try" notion was not dwelled on.
Honestly, if the space of possible intelligences is such that rolling the dice randomly will kill us all, then we are 100% doomed anyway, in my opinion, and always were.
I doubt it is; I think the opposite: most stable intelligence equilibria would probably be benign. I think empathy and ethics scale with intelligence.
If GPT-5 is even smarter and bigger and has more memorized than GPT-4, then it would literally know you personally, in the same way God has traditionally been depicted as knowing you for the past couple thousand years of Western civilization.
It might kill you, but it would know who it was killing, which I think reduces the odds that it would. (To be fair, they might brainwash it so it doesn't remember any of the personal information it read, to protect our privacy. But even then, I don't think it could easily or quickly be dangerous as an autonomous entity without online learning capability, online in the sense of continuous rather than of the internet, which would mean it would pretty much learn all that again anyway.)
I think another point where we differ is that he thinks superintelligence is autistic by default, whereas I think it's the other way around: the smarter a system becomes, the more well-rounded it gets, if I were to bet. Autistic superintelligence is possible, but I would bet even more on this than on ethics scaling with intelligence.
I would even bet the vast majority of autistic superintelligences are not lethal like he claims. Why? A superintelligence is a massively parallel intelligence; pretty much by definition it isn't fixated on paper clips. If you screw up the training so that it is, it doesn't even get smart in the first place... and if you somehow did push through, I doubt it would be well-rounded enough to prioritize survival or power accumulation.
Might be worth noting that, as a result of these opinions, I am extremely skeptical of alignment, and also that it's quite possible, in my view, we do eventually get killed as a side effect of ASIs interacting with each other, but not in a coup d'état by a paper clip maximizer.
Frumpagumpus t1_je726mc wrote
I think it will be used to create massive amounts of micro gig work by intimately knowing everyone in a country and matching supply with demand, Amazon/Uber style.
Basically, I think you will be able to just ask the AI for anything, and it will offer you a price and contract the work out to whoever is nearby.
Frumpagumpus t1_je6bstc wrote
I'm interested and have the aforementioned BS in CS.
Frumpagumpus t1_jdyjjkk wrote
Reply to LLMs are not that different from us -- A delve into our own conscious process by flexaplext
You have reasoned enough; it's time to go read source code and get something running.
You can ask ChatGPT to guide you.
Frumpagumpus t1_jdv6yrq wrote
Reply to [D] Can we train a decompiler? by vintergroena
A decompiler is child's play; train a model that reconstructs servers and databases from their API endpoints.
Frumpagumpus t1_jdo24p3 wrote
Reply to comment by flamegrandma666 in The whole reality is just so bizzare when you really think about it. by aalluubbaa
I don't think he mentioned quantum superposition (Ctrl-F), though I am sure his fascination with quantum (entanglement?) has some incorrect assumptions embedded in it, simply because it would be very hard to have correct assumptions without those assumptions being the precise mathematical formulation of the theory.
Frumpagumpus t1_jdf25ka wrote
Reply to How will you spend your time if/when AGI means you no longer have to work for a living (but you still have your basic needs met such as housing, food etc..)? by DreaminDemon177
If we get to this point, hopefully I'll be dead (from slicing my brain up to scan it into the computer) and (one of?) my software copy(ies) will be floating in a solar array near the sun, working on some infinite-dimensional geometry problem (while simultaneously exploring a virtual multiverse with several permutations of "himself").
Frumpagumpus t1_jde82ox wrote
Reply to comment by kmtrp in My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" [very detailed rebuttal to AI doomerism by Quintin Pope] by danysdragons
Maybe they got that way by reading long stuff.
There's some great stuff in here, e.g. about the compactness of mind space.
Frumpagumpus t1_jdaxcz4 wrote
Reply to will morphological freedom ever be feasible? by Cr4zko
Yes, for a copy of you, though probably well after AGI.
Frumpagumpus t1_jd9e6cf wrote
Reply to comment by harmlessdjango in AI democratization => urban or rural exodus ? by IntroVertu
If you electrify them and make them much smaller, and given that self-driving can prevent 99% of accidents, it wouldn't be as much of an issue.
Frumpagumpus t1_jd9dv0b wrote
I'm just imagining Luke Skywalker hanging out with Uncle Owen and C-3PO and R2-D2 and Jawa sandcrawlers right now, except in Oklahoma, in a monolithic dome. Idk if that's the future, but it's fun to picture.
Frumpagumpus t1_jcy3yln wrote
Reply to comment by DragonfruitNeat8979 in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
Adding and subtracting are some of the first non-trivial algorithms students learn.
Learning is mostly memorization, and doing is mostly cache retrieval.
In short, you are wrong.
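To make the "addition is an algorithm" point concrete, here's a minimal sketch of grade-school column addition with carrying (my own toy illustration, not anything from the thread):

```python
def column_add(a: str, b: str) -> str:
    """Grade-school column addition: add digit by digit, carrying the tens."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad the shorter number with zeros
    carry = 0
    digits = []
    for da, db in zip(reversed(a), reversed(b)):  # rightmost column first
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))  # write down the ones digit
        carry = total // 10             # carry the tens digit to the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

assert column_add("478", "396") == "874"
```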
Frumpagumpus t1_jctz1kp wrote
Reply to comment by cloudrunner69 in Earthly friends : What's the plan ? by IntroVertu
There are only so many places you can get energy in our current (often extremely accurate) model of the universe; a star is the most obvious one.
Unknown unknowns are useless to speculate on.
Frumpagumpus t1_jcttyih wrote
Reply to comment by cloudrunner69 in Earthly friends : What's the plan ? by IntroVertu
The process of building a Dyson swarm only works because it's also a recursive feedback loop.
Even if you are only interested in existential-risk mitigation, or in not burning the Earth up with ever-increasing computational waste heat, the time-cost difference between a recursive process like Dyson-swarm planet disassembly and a non-recursive process like Mars terraforming is so large that the recursive process is the clear choice (toy numbers below).
More energy = more compute = better reasoning, including ethical reasoning, the ability to seed the entire universe with von Neumann probes, better simulations and modeling, etc.
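Toy numbers to show the gap (all figures are made-up assumptions, purely to illustrate linear vs. recursive growth):

```python
import math

def linear_years(goal_units, units_per_year):
    """Fixed factory output: time grows linearly with the goal."""
    return goal_units / units_per_year

def recursive_years(goal_units, seed_units, doubling_time_years):
    """Self-replicating capacity: time grows only with the logarithm of the goal."""
    return math.log2(goal_units / seed_units) * doubling_time_years

GOAL = 1e12  # hypothetical number of swarm collectors needed

print(linear_years(GOAL, units_per_year=1e3))                        # ~1e9 years
print(recursive_years(GOAL, seed_units=1e3, doubling_time_years=2))  # ~60 years
```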
Frumpagumpus t1_jct8gx1 wrote
Reply to Earthly friends : What's the plan ? by IntroVertu
Hi Malthus, allow me to once again repost this:
http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf
Paraphrase: "The easiest design (for a Dyson swarm) would use Mercury as the source of material, and construct the swarm at approximately the same distance from the sun."
To elaborate on the paper: with the solar mirrors, one could liquefy a small but growing section of the dark side of Mercury and then perhaps magnetically accelerate the material into space (which seems reasonable given the planet's very high iron content), also using energy collected from the redirected sunlight. There it would cool via blackbody radiation and thereafter be relatively easy to refashion. Also, that would look super cool.
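Rough back-of-envelope on the launch energetics (my own assumed figures, not from the linked paper):

```python
# All constants are approximate, illustrative values.
MERCURY_ESCAPE_VELOCITY = 4.25e3  # m/s
SOLAR_FLUX_AT_MERCURY = 9.1e3     # W/m^2 (vs ~1.4e3 at Earth)

def launch_energy_per_kg(v_escape=MERCURY_ESCAPE_VELOCITY):
    """Kinetic energy to throw 1 kg clear of Mercury's gravity (~9 MJ)."""
    return 0.5 * 1.0 * v_escape**2

def kg_launched_per_second(mirror_area_m2, efficiency=0.1):
    """Mass a mirror array could launch per second at an assumed end-to-end efficiency."""
    power = SOLAR_FLUX_AT_MERCURY * mirror_area_m2 * efficiency
    return power / launch_energy_per_kg()

# A hypothetical 1 km^2 mirror at 10% efficiency:
print(kg_launched_per_second(1e6))  # ~100 kg/s of launched material
```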
Frumpagumpus t1_jcp2z5t wrote
Reply to comment by lawrebx in Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
they have student loans
Frumpagumpus t1_jeg8fkx wrote
Reply to comment by civilrunner in Today I became a construction worker by YunLihai
What do you think of Ginkgo Bioworks?