Submitted by dracount t3_zwo5ey in singularity

I have posted this to another subreddit but I think it really belongs here.

First and foremost, the latest knowledge and technology is freely available to anyone who wants to try it. The greatest advances at the moment are being produced and backed by multi-billion-dollar companies (Microsoft, Google, etc.) with primarily capitalistic goals in mind, or by governments such as China and Russia, which control and implement AI themselves. Even OpenAI is a for-profit company.

Secondly, AI has already started replacing jobs and will continue to do so at an increasing pace. For example, in 2016 Foxconn (the Chinese manufacturer that assembles iPhones) replaced 60,000 jobs with robots. Currently this is the burning topic of the day among graphic designers and digital artists, following the release of technologies such as Midjourney and Stable Diffusion. It is estimated that anywhere from 10 to 50 percent of jobs may be replaced by AI and robots in the next decade.

If job losses reach such high numbers, this will cause massive social disruption, likely ushering in the fall of capitalism, to be replaced by something like a cyberocracy (a government run by AI) or by socialist or communist ideologies, with AI having the potential to provide for the basic needs of the population (food, water, electricity, etc.).

Can and should we hand over the autonomy of our governments to AI? Governed by pure logic and calculation? Unable to understand emotions or empathy? On the other hand, it may be able to make many better decisions than our politicians can, without bias, prejudice, corruption or self-interested motivations. China already has AI "advising" on every court ruling.

There are many countries whose people suffer at the hands of evil regimes and are ruled in tyranny.

But can you still entrust decisions such as abortion rights, gun laws, capital punishment and animal rights to machines?

It's a crazy time and I think wisdom will be in short supply to assist in thinking about these decisions.

The government's hopelessly slow and uninformed involvement is especially worrying (as illustrated by the Cambridge Analytica saga). Can you imagine what someone, or some government, could potentially do? Never mind the dangerous possibilities of robot soldiers, drones and police forces.

Currently it's the Capitalistic West vs the Autocratic East. Both have access to the same technology, both are dangerous and have their flaws and neither is built with AI in mind.

This technology is going to change everything and I hope there are people out there thinking about these sorts of things. And more than that, it is moving forward far faster than we have the capacity to think about.

I don't know the answer, but the ones currently creating the AI make me very concerned about the future.

20

Comments


Calm_Bonus_6464 t1_j1vy8mc wrote

I don't know why you're assuming we have a choice. If we have beings infinitely more intelligent than us, there's no possible way we can retain control. In a worst case scenario, AI could even be hostile towards humans and destroy our species, which is precisely what people like Stephen Hawking warned us about.

AI governance is inevitable, and there's nothing we can do to stop it. For the first time in 300,000 years we will no longer be Earth's rulers, and we will have to come to accept this.

14

leroy_hoffenfeffer t1_j1vz5ls wrote

>If the job losses reach such high numbers this will cause massive social disruptions, likely ushering in the fall of capitalism to be replaced by something like a Cyberocracy (a government run by AI) or socialist or communist ideologies, with the potential of AI to accommodate the basic needs of the population (food, water, electricity, etc).

If you were to take the majority of comments made in this subreddit at face value, you'd walk away thinking most people here would be completely fine with that. I'm glad there are people taking the consequences of the Fourth Industrial Revolution seriously.

>Can and should we give over the autonomy of our governments to the AI? Governed by pure logic and calculations? Unable to understand emotions or empathy? On the other hand it may be able to make many better decisions then our politicians can. Without bias, prejudice, corruption and self interested motivations? China already have AI "advising" every court ruling.

All great questions. Unfortunately, the global political class (outside of Europe, in very specific circumstances) is wholly unequipped to provide any answers. Most US congressmen barely understand how the internet works, let alone anything more complicated like machine learning. Those questions also assume people in power care about ethics and philosophy as they relate to technology, which... is very questionable at best, and totally incorrect at worst.

>This technology is going to change everything and I hope there are people out there thinking about these sorts of things. And more than that, it is moving forward far faster than we have the capacity to think about.

Again, if you take the majority of comments made in this subreddit at face value... then we're in for a world of hurt, seeing as most people don't seem to care at all, and actively want to usher in massive social upheaval by adopting AI across the board, willy-nilly.

>I don't know the answer, but the ones currently creating the AI make me very concerned about the future.

I don't at the moment know the answer either, and your fears are completely sound: most people don't think about this stuff at all.

10

Relative_Purple3952 t1_j1w1pnf wrote

I have come to terms with the fact that you are either an accelerationist who just wants to get "it" done, or a neo-Luddite. I don't think there is much, if any, hope that humanity sits back and asks itself, "How should we go about ushering in the next, and maybe last, human age?"

2

leroy_hoffenfeffer t1_j1w3crl wrote

>I have come to terms with the fact that you are either an accelerationist and just want to get "it" done or a neo-Luddite.

I want to try to safely and ethically consider the societal application of AI, and that makes me a neo-Luddite who is totally against adopting any technology at all?

2

Frumpagumpus t1_j1wbsed wrote

LessWrong has been spilling copious amounts of ink on this topic for about two decades. The talking is done (well, actually we are talking about it more than ever and bringing a lot of new people into the conversation); the doing is now. What do you want us to do, consult five-year-olds to see what they think?

In many ways (not all ways), technological progress had stagnated for years up until this point.

1

Frumpagumpus t1_j1wdtft wrote

> I want to try and safely and ethically consider the societal application of AI,

Hypotheticals have already been considered ad nauseam. I take this to imply you would advocate for some sort of pause, which I don't think is either possible or desirable.

I am just pointing out that it's all been rehashed over and over already.

1

mootcat t1_j1wfb3v wrote

Indeed. This sub has major issues conceptualizing superintelligence, thinking we will get all our wishes fulfilled as a guarantee.

We are functionally growing a God. There is no containing it, and we had better hope our efforts at alignment before the point of explosive recursive growth were enough.

Just from the simple systems we've seen so far, we have witnessed countless examples of misalignment, and of systems working literally as intended but against the desires of their programmers.

This Rumsfeld quote always comes to mind:

"Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know."

Any one of these unknown unknowns could result in the utter decimation of life by an AI superpower.

12

ngnoidtv t1_j1wfcfu wrote

A future ruled by AI/AGI will be far weirder and more complex than anything modern sci-fi is capable of conceiving.

Things like the 'paperclip maximizer' or the 'Terminator scenario' merely present us with a primitive, anthropocentric understanding of this future -- unless somebody deliberately uses AI to inflict destruction and chaos, in which case it's not the AI's fault.

Think of how we rescue koalas from bushfires and give them veterinary care -- while at the same time poaching rhinos and elephants for their ivory. Or how we drive to work, accidentally run over a cat, and keep driving. Compared to animals, we are gods. The AI future will probably be a mixed bag like that -- but still more complicated and unimaginable.

15

AsheyDS t1_j1wm5cr wrote

>If we have beings infinitely more intelligent than us, there's no possible way we can retain control.

Infinitely more intelligent, sure. But no AI/AGI is going to be infinitely intelligent.

0

AsheyDS t1_j1wo56q wrote

>the ones currently creating the AI make me very concerned about the future

Because of a vague fear of the future consequences of AI, or do you believe AI developers are somehow inherently nefarious?

>Even openAI is a for profit company.

I get the anti-capitalist bias, but there's nothing necessarily wrong with that. A for-profit company is easier to both start and maintain than a non-profit, and allows for more avenues for funding. If OpenAI didn't have Microsoft's deep pockets backing them, they'd probably have a bigger push to monetize what they've made. Even if they do have additional monetary goals, AI R&D costs money.

3

leroy_hoffenfeffer t1_j1wriu5 wrote

>I am just pointing out it's all been re hashed over and over already

Has it, though? Capitol Hill seems totally and completely incapable of talking about technology in any meaningful way.

Joe Schmoe on Reddit can't do anything to affect national discourse: if the people who matter aren't discussing this, then the conversation has not been had yet.

2

imlaggingsobad t1_j1wrlqs wrote

I would prefer the AGI be created in the West rather than in a totalitarian regime, so I will support the big tech companies like OpenAI and DeepMind.

3

Frumpagumpus t1_j1wv2by wrote

OK, but we're on Reddit. Also, the people talking about it are the ones implementing it (I'm no bigwig, but I do have a pull request or two to mostly unimportant projects, and I'm an API consumer/implementer).

Speaking of implementing actual stuff with the API, nothing I plan on in the near future would really have an ethical dimension, I don't think. Though I could see possibly doing stuff on a 5+ year timeframe where I might pause for a second, lol.

Another thought is that legislation is mostly written by staffers (from what I know of the US system), and they might be here talking about this stuff...

1

Sh1ner t1_j1wxm61 wrote

Your post covers various areas but some counter points:
 
Job loss via robotics does not fall under job loss via AI. They are currently separate categories that will have more overlap in the future.
 
"Automation" has previously referred to automation via robotics. The term will be expanded to also include AI, but this is quite new, and until very recently most people did not use "automation" to cover both.
 
I wouldn't worry about government adoption of AI. All it takes is one locality, city, state or nation to adopt AI and yield strong results that outpace previous metrics, which will create the incentive for others to adopt it.
 
Economic systems will no longer be defined as capitalist / communist / socialist / etc. A new economic system will arise through maximizing equality, output, and incentive for the individual -- the three pillars by which an economic system is measured. I suspect an AI will be able to increase all three over previous systems, and the resulting economic system will not fit any human economic system of the past.
 
Adoption of AI will happen regardless, as it's too useful a tool. The problem is who gets to code the AI, which determines its output at this point, and also the alignment problem if it becomes sentient, which at this point is nothing more than a dream rather than something rooted in reality.
 
Job losses can be lowered in the short term by many roles being augmented by AI to increase output. Eventually, however, job losses will occur, forcing the adoption of UBI. Humans will be retained to do the jobs AI can't, a gap that will become smaller and smaller over time. More and more people will end up on UBI. No idea what happens after that.
 
AI will take smaller roles in society first, showing that it's better than humans in those roles; then humans will expand the role of AI to cover larger responsibilities like governance, the application of law, and the economy. How else are people going to have faith in AI?
 
I am not concerned about the creation of sentient AI, as I have no power to stop or slow its progress. I believe the creation of sentient AI is a requirement for a better future. However, I also think it's a lot harder than the people in this subreddit believe and will take longer. I have a lot of faith that the AIs we build between now and a sentient AI will greatly help humanity in the meantime.
 

2

Scarlet_pot2 t1_j1x03ui wrote

The way to fight this would be to start, fund, and contribute to Open-source AI projects. Stability AI is one, but we need more like it

2

GalacticLabyrinth88 t1_j1x37lm wrote

Theoretically, AI/AGI can and will become infinitely intelligent relative to our organic perspective, because it will possess the ability of recursive self-improvement. It's already happening with AI art: the AIs responsible used to train on art produced by humans to create their own artworks; now they are training on previously created AI artworks in order to create even better AI art, and so on. AI will become more and more intelligent on an exponential scale because of how quickly it will be able to advance, thinking millions of times faster than the human brain and arriving at solutions faster as well.

AI is like Pandora's Box. Once it's been opened, it can't be closed again.

2

TheLastSamurai t1_j1x5bpx wrote

No we don’t. We can stop it literally right now. Governments are overthrown, corporations are dismantled; organized, motivated, angry people change the course of history. This quasi-religious fait accompli attitude is very bizarre.

1

Dickenmouf t1_j1xeosa wrote

I wonder if AI might be the answer to the Fermi paradox. If AGI is inevitable, and likely exponential when it happens, then maybe most civilizations that create it won’t last long after its creation. Whether because of self-destruction, annihilation by the AI, or absorption/enlightenment, the result is the end of the progenitor species. A highly advanced AI might not want to seek contact with less intelligent lifeforms.

6

Webemperor t1_j1xwzus wrote

China is, unironically, more likely to regulate AI than any other government in the world, on the off chance that one of its corporations makes greater advances in AI than the state and overthrows it.

In the West this is extremely unlikely, since Western governments are essentially owned by corporations.

2

dracount OP t1_j1xyb94 wrote

>Because of a vague fear of the future consequences of AI, or do you believe AI developers are somehow inherently nefarious?

Because they have their shareholders' best interests at heart. With such power, society should come first, not shareholders. Not just anyone can own nuclear weapons.

Soon it will be providing us with food, money, electricity, information, education... Services that cost them nothing extra to provide will be divided up and sold to maximize profit. Education? Sure, get our gold package with a personal AI tutor; with silver you get 10 tutorials on questions you have difficulty with; with bronze you get 2 hours of assistance per week.

Is there a better way? I think so. It needs some thought and consideration though.

1

sheerun t1_j1y3ocf wrote

The march of replacing proprietary software and hardware with open source and open hardware will last forever, in parallel with the centralization/decentralization cycle. It's not governed by anyone; it's a social dynamic.

1

Calm_Bonus_6464 t1_j1y5so5 wrote

It depends. Countries like France and Portugal probably aren't that different from the US, but northern European countries like Denmark, Finland, Sweden, Switzerland, Germany, etc. have the lowest levels of corruption in the world and are Europe's leaders in AI and big playmakers in EU decisions.

1

WikiSummarizerBot t1_j1y5tpq wrote

Corruption Perceptions Index

>The Corruption Perceptions Index (CPI) is an index which ranks countries "by their perceived levels of public sector corruption, as determined by expert assessments and opinion surveys". The CPI generally defines corruption as an "abuse of entrusted power for private gain". The index has been published annually by the non-governmental organisation Transparency International since 1995. The 2021 CPI, published in January 2022, ranks 180 countries "on a scale from 100 (very clean) to 0 (highly corrupt)" based on the situation between 1 May 2020 and 30 April 2021.


1

No_Ask_994 t1_j1zbr9k wrote

To be honest, training AI art models on AI art isn't giving good results, at least for now.

It might be possible in the future, with good AI filtering on the datasets to pick only the really good ones? Maybe...

But for now, it's a bad idea.

1

No_Ask_994 t1_j1zeh8y wrote

Maybe.

The thing is, the country that doesn't stop AI development will become the world leader in a few years or decades (depending on its starting position and resources).

So I don't think any country will stay out of the party. It might get regulated and controlled by governments, and so slow down, but AI will keep going.

Anyway, even if they wanted to, it's impossible to stop without controlling computing power. In 20 years you will probably be able to train a GPT-3-sized model in minutes on a personal computer.

1

AsheyDS t1_j1zkqqd wrote

>Because they have shareholders best interests at heart. With such power, society should come first, not shareholders.

That's not always the case. It depends on the structure of the company. However, even if it isn't shareholders, say it was funded by crowdsourcing... AI devs are still beholden to those that donated, one way or another. Unfortunately, it can't be developed in a financial vacuum. That said, even if there are financial obligations, that doesn't mean AI devs are passively following orders either. Many are altruistic to varying degrees, and I doubt anyone is making an AGI just to make money or have power. Shareholders perhaps, but not the people actually making it.

I guess if it's a big concern for you, you should try looking for AI/AGI startups that don't have shareholders, determine their motives, and if you agree with their goals then donate to them directly.

2

SteppenAxolotl t1_j208evq wrote

It doesn't matter who controls it, they're afraid the future will look like the present and the past.

The structure of all political economies tends to produce certain results. A system that wants to survive won't permit situations that allow people to opt out en masse. Most people on this sub want their own pet AGI that will give them the agency to materially survive without depending on anyone else. They want to free themselves of the one thing society exists to provide; society evaporates when that dependency is broken.

0

Ashamed-Asparagus-93 t1_j217k1r wrote

Something that should be noted here, and I may make a post about it later:

Humans in general feel less joy killing something the more intelligent it is. Let me present to you the cat-mouse argument.

Everyone's OK with killing rats and mice, but kill a cat, or God forbid a dog, and you can actually get jail time.

Now why is that?

Edit: This is of course excluding farmed animals (cows, chickens, animals we have to kill to eat).

1

ClubZealousideal9784 t1_j26yick wrote

Rats and mice are very intelligent, and a pig is considered more intelligent than a cat or dog. We don't have to kill farmed animals to eat, and for most of human history meat was far less frequent for the vast majority of people, certainly not a daily indulgence.

2