Submitted by Liberty2012 t3_11ee7dt in singularity
3_Thumbs_Up t1_jadq63e wrote
Reply to comment by RabidHexley in Is the intelligence paradox resolvable? by Liberty2012
There is an infinite multitude of ways history might play out, but they're not all equally probable.
The thing about the singularity is that its probability distribution of possible futures is much more polarized than humans are used to. Once you optimize hard enough for any utility curve, you get either complete utopia or complete dystopia the vast majority of the time. That doesn't mean other futures aren't in the probability distribution; they're just far less likely.
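A toy sketch of what I mean (Python; the 50/50 alignment coin-flip and the `human_value` mapping are made-up assumptions for illustration, not a real model of anything): the optimizer pushes outcomes toward the top of its own utility scale, so when you don't know in advance whether that scale is aligned with ours, the distribution of outcomes for us goes bimodal as optimization pressure grows.

```python
import random

def proxy_utility_of_best(pressure):
    """Optimizer samples `pressure` random world-states scored in [0, 1]
    by its own utility curve and keeps the highest-scoring one."""
    return max(random.random() for _ in range(pressure))

def human_value(outcome, aligned):
    # Made-up assumption: if the utility curve is aligned, what's good
    # for it is good for us; if not, its best case is our worst case.
    return outcome if aligned else 1.0 - outcome

for pressure in (1, 10, 10_000):
    results = []
    for _ in range(1_000):
        aligned = random.random() < 0.5  # sign of alignment unknown in advance
        results.append(human_value(proxy_utility_of_best(pressure), aligned))
    middling = sum(0.2 < r < 0.8 for r in results) / len(results)
    print(f"optimization pressure {pressure:>6}: share of middling futures = {middling:.2f}")
```

The point isn't the toy numbers; it's that the in-between futures never leave the distribution, their probability just gets crushed toward zero as the optimization pressure goes up.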
Liberty2012 OP t1_jadrb81 wrote
I don't think utopia is a possible outcome; the idea is itself a paradox. Essentially every utopia becomes someone else's dystopia.

The only conceivable utopia is one designed just for you: a personal virtual world built around your own interests. But even that is paradoxically both a utopia and a prison. Welcome to the Matrix.
RabidHexley t1_jadsn49 wrote
Utopia in this context doesn't mean "literary" utopia, but a world where we've solved most or all of the largest existential problems causing struggle and suffering for humanity as a whole (energy scarcity, climate catastrophe, resource distribution, slave labor, etc.), not every possible individual struggle.

That doesn't mean we've created a literally perfect world for everyone, but an "effective" utopia.
Liberty2012 OP t1_jadusvu wrote
However, this is just another facet of the same problem: defining which things should fall within the domain of AI control immediately creates conflicting views.

We are not even aligned ourselves. Not everyone will agree on the boundaries of your concept of a reasonable "utopia".
RabidHexley t1_jadwxc5 wrote
I'm not trying to actually define utopia. The word is just being used as shorthand for "generally very good outcome for most people", which is possible even in a world of conflicting viewpoints; that's why society exists at all. Linguistic shorthand, not a literal claim.
The actual definition of utopia in the literary sense is unattainable in the real world, yes. But our general wants and needs on a large scale aren't so divorced from each other that a positive outcome for humanity is inconceivable.
Liberty2012 OP t1_jadzsar wrote
> But our general wants and needs on a large scale aren't so divorced from each other that a positive outcome for humanity is inconceivable.
In the abstract, yes; however, even slight misalignment is where all of society's conflicts arise. We have civil unrest and global war despite being broadly aligned in the abstract.

The AI will have to take the abstract and resolve it into something concrete. Either we tell it how to do that, or we leave that decision up to the AI, which brings us back to the whole question of AI safety: how much agency does the AI have, and what will it do with it?
RabidHexley t1_jae2c7j wrote
> The AI will have to take the abstract and resolve it into something concrete. Either we tell it how to do that, or we leave that decision up to the AI, which brings us back to the whole question of AI safety: how much agency does the AI have, and what will it do with it?
This is only the case in a hard (or close to hard) take-off scenario, where an AI is trying to figure out how to reshape the world into an egalitarian society from the ground up, given the world's current state.
It's possible that we achieve advanced AI but that global change happens much more slowly, trending towards effective pseudo-post-scarcity via highly efficient renewable energy and automated food production.
Individual (already highly socialized) nation-states start instituting policies that trend their societies towards egalitarian structures. These social policies get exported throughout the Western and eventually Eastern worlds. Generations pass, and social unrest in totalitarian and developing nations leads to technological adoption and similar policies and social structures forming.
Socialized societal structures and the use of automation increase over time, which causes economic conflict to trend towards zero. Over the very long term (centuries), certain national boundaries begin to dissolve as the reason for those structures' existence begins to be forgotten.
I'm not advocating this as a likely outcome, just as a hypothetical, barely-reasonable scenario for how the current world could trend towards an egalitarian, post-scarcity society over a long time-span via technological progress and AI, without AGI needing to take over the world and restructure everything. It's just to illustrate that there are any number of ways history can play out besides "AGI takes over and either fixes or destroys the world."
Liberty2012 OP t1_jae75ik wrote
> Just as a hypothetical, barely-reasonable scenario
Yes, I can conceive of this hypothetical. But I have little hope for it based on any reasonable assumptions we can make about what progress would look like, given that at present AI is still not an escape from our own human flaws. FYI, I expand on that in much greater detail here: https://dakara.substack.com/p/ai-the-bias-paradox

However, my original position was an attempt to resolve the intelligence paradox, which proponents of ASI assume will be a containment problem at the moment of AGI. If ASI is the goal, I don't see a path that takes us there that escapes the logical contradiction.
RabidHexley t1_jadyhsb wrote
> Once you optimize hard enough for any utility curve, you get either complete utopia or complete dystopia the vast majority of the time.
Yeah, if we assume the future is guaranteed to trend towards optimizing a utility curve. That isn't necessarily how the development and use of AI will actually play out. You're picking out data points that are actually only a subset of a much larger distribution.