
Liberty2012 OP t1_jadzsar wrote

> But our general wants and needs on a large scale aren't so divorced from each other that a positive outcome for humanity is inconceivable.

In the abstract, yes; however, even slight misalignment is where all of society's conflicts arise. We have civil unrest and global war despite the fact that, in the abstract, we are all aligned.

The AI will have to take the abstract and resolve it into something concrete. Either we tell it how to do that, or we leave that decision up to the AI, which brings us back to the whole concept of AI safety: how much agency does the AI have, and what will happen?

0

RabidHexley t1_jae2c7j wrote

>The AI will have to take the abstract and resolve it into something concrete. Either we tell it how to do that, or we leave that decision up to the AI, which brings us back to the whole concept of AI safety: how much agency does the AI have, and what will happen?

This is only the case in a hard (or close to hard) take-off scenario where the AI is trying to figure out how to reshape the world into an egalitarian society from the ground up, given its current state.

It's possible that we achieve advanced AI but global change happens much more slowly, trending towards effective pseudo-post-scarcity via highly efficient renewable energy production and automated food production.

Individual (already highly socialized) nation-states start instituting policies that trend those societies towards egalitarian structures. These social policies start getting exported throughout the Western and eventually Eastern worlds. Generations pass, and social unrest in totalitarian and developing nations leads to technological adoption and to similar policies and social structures forming.

Socialized societal structures and the use of automation increase over time, causing economic conflict to trend towards zero. Over the very long term (entering into centuries), certain national boundaries begin to dissolve as the reason for those structures' existence begins to be forgotten.

I'm not advocating this as a likely outcome, just offering a hypothetical, barely-reasonable scenario for how the current world could trend towards an egalitarian, post-scarcity society over a long time-span via technological progress and AI, without the need for AGI to take over the world and restructure everything. It's meant to illustrate that there are any number of ways history can play out besides AGI taking over and either fixing or destroying the world.

2

Liberty2012 OP t1_jae75ik wrote

> Just as a hypothetical, barely-reasonable scenario

Yes, I can see this hypothetical. But I have little hope that it rests on any reasonable assumptions we can make about what progress would look like, given that at present AI is still not an escape from our own human flaws. FYI - I expand on that in much greater detail here - https://dakara.substack.com/p/ai-the-bias-paradox

However, my original position was an attempt to resolve the intelligence paradox, which proponents of ASI assume will be an issue of containment at the moment of AGI. If ASI is the goal, I don't perceive a path that takes us there that escapes the logical contradiction.

1