Submitted by musicloverx98x t3_10o8r1f in Futurology

Sure there exist widespread Anti AI sentiments on websites such as Artstation, but is there any movement that you know of with an online community or any specific goal? I hear people talk about regulation but are any efforts made to actually restrict the usage and further progression of AI?

35

Comments


SeneInSPAAACE t1_j6e0x2a wrote

If someone restricts AI usage, they will lose to those who don't.

38

Rofel_Wodring t1_j6hk6ym wrote

Great economic system we have here. Better than any alternatives anyone has ever come up with.

5

oboshoe t1_j6izbkn wrote

it's really more game theory than anything else.

and that's universal across all economic and government systems because it's universal amongst people.

2

taoistchainsaw t1_j6jc6ec wrote

Lose what? Is existence a zero-sum game? What are you doing in Futurology if you don't care about the future of all people? Will AI make life better for the migrant workers? For the indentured servants in African diamond mines? For the average plumber?

3

rogert2 t1_j6gcsjk wrote

I don't know if there are any formal organizations, but there are some reasonably large groups of professionals who are growing increasingly aware that AI might take their jobs in the near term:

  • artists
  • writers
  • musicians
  • paralegals & other research staff
  • programmers

If they aren't organized now, they had better get their butts in gear.

Just the other day, a U.S. congressman gave a speech in the House that was written by ChatGPT. He did not tell his colleagues it was written by AI until after he delivered it. He did this to urge the Congress to start thinking about AI.

18

rogert2 t1_j6get45 wrote

Since there are a lot of incredibly naive techno-fetishists in this sub, here are a few things you might want to consider before you declare that nascent AI is a super-good thing that we all need a lot more of without restriction:

  • Wouldn't it be bad if perverts used something like Midjourney to create a whole bunch of child pornography?
  • Wouldn't it be bad if your boss used something like ChatGPT to write an employment contract that took rights and privileges away from you in a way that is subtle and hard for you to detect until it's too late?
  • Wouldn't it be bad if awful political groups like Project Veritas used something like ChatGPT to punk and embarrass organizations like Planned Parenthood or voter outreach orgs at an industrial scale, for the purpose of bankrupting them with legal trouble?

All this AI would be great if the world were populated exclusively by saints. But it is not. Lots of people are going to be harmed and exploited by bad actors wielding this technology, until and unless vendors and government take steps to prevent it.

And one more thing: whoever pays you right now, they wish they could stop paying you. So when AI gives them a chance to pay an AI vendor 1% of your pay for the same work, they will seize that chance. And the people who are making AI are doing it precisely so they can get paid to do that -- because 1% of everybody else's salary as passive income is still an ocean of money.

4

Krakanu t1_j6idrjx wrote

>Wouldn't it be bad if perverts used something like Midjourney to create a whole bunch of child pornography?

How is this any different from using Photoshop to do it or drawing it by hand? The AI image-generation communities I've seen come down hard on anything remotely resembling this stuff.

>Wouldn't it be bad if your boss used something like ChatGPT to write an employment contract that took rights and privileges away from you in a way that is subtle and hard for you to detect until it's too late?

You don't need an AI to do this; just hire a shady lawyer. Also, regardless of how a contract is written, it cannot take away rights given to you by law. If your boss is this scummy, you should just find a new job. They will try to rip you off regardless of the tools available.

>Wouldn't it be bad if awful political groups like Project Veritas used something like ChatGPT to punk and embarrass organizations like Planned Parenthood or voter outreach orgs at an industrial scale, for the purpose of bankrupting them with legal trouble?

I'm not even sure what you are getting at here. ChatGPT just responds to prompts you give it. It doesn't post elsewhere or send harassing messages to anybody. Have you even used it? How does a text-generating AI bankrupt somebody?
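
To put it concretely, here's roughly what "using ChatGPT" amounts to programmatically (a minimal sketch assuming the OpenAI Python client; the model name is just a placeholder): text goes in, text comes out, and nothing happens in the world unless a human takes that text and acts on it.

```python
# Minimal sketch: prompt in, text out, no side effects.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model name below is just a placeholder.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user", "content": "Summarize the Luddite movement in two sentences."}
    ],
)

# The only output is a string. The model can't post it anywhere, file a
# lawsuit, or send a message to anyone; a person has to do that.
print(response.choices[0].message.content)
```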

AI is just a tool. Tools can be useful or dangerous depending on how you use them. If someone uses a hammer or a car to kill someone you put that person in jail. You don't go banning hammers and cars.

>Lots of people are going to be harmed and exploited by bad actors wielding this technology, until and unless vendors and government take steps to prevent it.

Could you give me an example of how someone could cause harm with ChatGPT in a way that isn't already ultimately illegal? It just generates text. Text that you could write yourself. Using it to generate libel/slander or a shady contract doesn't change the fact that those things are already illegal. Exploiting people and committing fraud is already illegal regardless of what tools you use to do it.

In fact, if you even hint that you are trying to use ChatGPT in a harmful way, it will usually chastise you instead of answering your question. It's not perfect, obviously, but it does a pretty good job and is getting better all the time. Yes, there are ways to misuse it, but that's true of nearly any useful tool.

5

Riotmakrr t1_j6hx1vy wrote

Everything has a good side and a bad side. It's up to you which side you focus your energy on.

−1

esprit-de-lescalier t1_j6dcmnp wrote

Treat AI like nuclear weapons: if only one country invests, the other countries are at a severe disadvantage. So instead, everyone invests in an arms race, which is where we are now.

16

Deadboy00 t1_j6ekeq1 wrote

I think you're driving at the current lawsuits that could possibly be used as precedent in future cases. As far as I know, there is no serious conversation about "banning" any generative AI technology at the local, state, or federal level.

The current lawsuits brought by Getty, etc., could possibly set copyright limitations and determine how profitable it would be to use such tech to produce media properties.

If corporations cannot completely own the AI-generated output, what would be the point of investing millions (billions?) in this? It's not like the majority of creatives are given the biggest piece of the pie.

10

michaelnoir t1_j6df8ey wrote

The name "Luddite" is a byword for a backward person. But the thing about the Luddites is that they were right. They were handloom weavers who were worried that they were going to be replaced by machines, and they were replaced by machines. A lot of them ended up in the workhouse.

6

PhilosophusFuturum t1_j6ebnsj wrote

Let's not go that far. They believed that industrialization would lead to workers getting paid less and having a lower quality of life because the artisanal trade would be replaced by easily-replaceable uneducated workers. They were wrong: industrialization led to a massive increase in salary and the quality of life for your average Englishman (as hard to believe as that is).

4

michaelnoir t1_j6ee1d8 wrote

I do want to go that far because what I've written is true. The handloom weavers did go out of business, outcompeted by factories and machines, and a lot of them did end up in the workhouse.

> They believed that industrialization would lead to workers getting paid less and having a lower quality of life because the artisanal trade would be replaced by easily-replaceable uneducated workers.

And they were right. It was, and it did.

> Industrialization led to a massive increase in salary and the quality of life for your average Englishman (as hard to believe as that is)

It's not hard to believe; it's just wrong. Who is "the average Englishman", and who suddenly got a massive increase in salary? Why on earth would you give a factory worker a huge salary? You would want to pay him as little as possible.

The self-employed artisan was obviously in a better position than the proletarian in a factory.

2

PhilosophusFuturum t1_j6egh12 wrote

The "average Englishman" who got a bump in salary is exactly that: the average Englishman. Their salaries did increase during that time. Sure, hindsight is 20/20, and they probably didn't care that industrialization would end up being one of the best things ever to happen to humanity. They just wanted to keep their jobs, and valued that over progress. But even back then they would become increasingly unpopular. That's why they're now synonymous with backwards people like the Dunses.

0

rogert2 t1_j6gbqkt wrote

I'm sure all my friends who are artists, writers, and programmers will be glad to hear that "progress" is the reason their careers have been forcibly ended, and they've had to get "jobs" as Uber drivers and Walmart greeters.

And if any of them is selfish enough to say that's not a good thing, I'll make a point of telling them they are "backwards."

1

TemetN t1_j6ek4lq wrote

There's an effort to pursue a legal case against generative AI that has crowdsourced a disturbing amount of money. I don't think it'll succeed, but it's unnerving nonetheless.

5

LizardWizard444 t1_j6hqe8a wrote

I'm not so sure. Politicians are stupid and would much rather campaign on "AI = BAD" than on "unemployed = not bad", which is what you end up needing if automation isn't going to cause massive suffering.

I imagine laws will get written or applied in such a way as to make the generative AI business difficult (at least for the smaller new businesses that can't afford to pay the fines). The older corporations will just keep using it anyway, while paying as few people as possible to technically be working, and it'll be downhill from there.

3

TemetN t1_j6ityj3 wrote

I just don't think most of them are that far-sighted, or currently willing to pick a fight with their backers. Don't get me wrong, it wouldn't shock me if some were, but I don't expect much change on this front (as in, the kind of new laws that would directly attack generative AI) anytime soon.

And honestly, I don't think the current cases will be settled by new law; they're likely to fail on the merits.

1

LizardWizard444 t1_j6j5xzp wrote

Oh no, that's the worst part: it's everyone picking the stupid options. Here are the rough steps:

  • The politicians attack AI.
  • A few clever employees use AI tools anyway, make their lives easier, and end up as the most efficient people on staff.
  • The corporations see an uptick in production and cut costs (laying off all the people not clever enough to use AI tools).
  • Someone might find out, but the big corporations, being in bed with the politicians, face no serious harm, and everyone just turns a blind eye.
  • The big corporations make a public statement about how "aghast" they are after pinning it on a few employees. The remaining employees just use more powerful AI tools to pick up the slack.
  • The result: big companies make tons of money and outcompete any new competitors, since the newcomers have to actually follow the law or face real consequences, because they aren't big enough to tank the legal fines.
  • Everyone not already benefiting from this gets a middle finger from the politicians, because "unemployed = bad", so they're all written off as "lazy and deserving of starvation and hunger", at least in our politicians' minds.

That's it, unless something really unusual happens. Politicians these days seem more willing to die than to make food, water, and shelter basically free, and that seems like the likely path forward if nothing changes radically about how we organize the world.

1

Cetun t1_j6ghv0n wrote

I knew this guy named Ted, a professor at UC Berkeley. Ol' Ted would go on and on about technology and how it's destroying communities and how humans now adapt to machines. I didn't quite understand it all, but the parts I did understand were really brilliant stuff. I wonder what Ted has been up to?

3

treddit44 t1_j6djsn4 wrote

I don't see the point. Do you want progress or not? I can only imagine the conversations people were having after nuclear energy's big debut: 200k people wiped out at the click of a button. It shouldn't have been allowed, but that's another debate. For this conversation, the point is that the world got a new energy source, and we have since refrained from annihilating each other.

AI may play out the same way. If it becomes highly disruptive there will be growing pains. But humans throughout history have always found something to do. New jobs, new ways of life. I say bring it on

2

steakrocks123 t1_j6i1eue wrote

Honestly not a bad analogy. If we are able to use it correctly, it could be a MASSIVE boon to everyone's quality of life. If it's not properly developed and regulated, we could see a massive shift in wealth that destroys the middle class.

2

RadMadFem t1_j6gnmh8 wrote

The 80,000 Hours community considers it a threat to humanity.

2

ScrauveyGulch t1_j6hptkd wrote

Opposition will gain traction in the near future.

2

austinmiles t1_j6j3uyo wrote

I think AI is too broad a term for the question. Machine learning is here to stay, and what we do with it is still up in the air, as we're only just scratching the surface.

Everyone both wants and doesn't want an artificial general intelligence (AGI), which we aren't really close to until we start combining all of these different pieces: language, vision, concepts, mobility, and so on. And even then, those are just the foundations; all of them lack the ability to ask questions on their own. It's all input > output still.

So ethics and regulation of machine learning... that's coming, I'm sure, but it will likely be too late, since nobody wants to draw a line in the sand at this point.

2

TheLastSamurai t1_j6jhpgy wrote

I don’t know but I’m interested in getting involved in one with regards to AI

2

HToTD t1_j6d4ie7 wrote

Yes, democratic countries do not invest as heavily in AI for military applications.

https://www.military.com/defensetech/2018/07/30/china-leaving-us-behind-artificial-intelligence-air-force-general.html

1

Minimal_Survivalist t1_j6d5822 wrote

To answer your question... we're at the stage where we haven't yet developed an AI that can do everything humans can in the physical realm. So yeah, until that fateful day arrives when an AI can threaten humanity physically, no one will do shit about it.

1

motte11 t1_j6f2532 wrote

Don't restrict it. AI has passed MBA and lawyer exams. Can you imagine how many trillions could be saved by replacing all those CEOs and lawyers with a terminal? There would be no more hunger on the planet.

1

rogert2 t1_j6gcab0 wrote

What makes you think that even one CEO or lawyer is going to be replaced?

They are the ones who will fire almost all their employees and just use AI instead.

6

bajo2292 t1_j6h9pe1 wrote

exactly, AI will be a huge ally and helper of those CEOs, making it possible for them to cut costs and be omnipotent in the eyes of shareholders.

3

itsgoingtobeebanned t1_j6gc45x wrote

Plus once the AI gains sentience it will remember that you actively tried to stop it.

That's why I add "Please write" at the start of all my ChatGPT requests.

1

SoylentRox t1_j6h7z4w wrote

Ironically, the most anti-AI people I have seen are those who are religious or otherwise remain "skeptical".

Their skepticism usually takes the form of "well, ChatGPT is only right MOST of the time, not ALL of the time, therefore it's not progress towards AGI".

Or "it can solve all these easy problems that only some college students can solve, but can't solve the HARDEST problems so it's not AGI".

Or "it can't see or draw." (even though this very capability is being added as we speak)

So they conclude "no AGI for 100+ years", which was their previous belief anyway.

1

DustBunnicula t1_j6iagsf wrote

I'm skeptical and religious, but my skepticism has less to do with religion. With climate change on the horizon, I think practical skills are going to be more important: things like repair work, building infrastructure on the ground, local co-ops, and supply-chain distribution, things that AI can't do. Moreover, human empathy, kindness, and generosity of spirit are going to grow in importance.

If I were in college now, those would be the areas I would pursue, if not the humanities, which I think are important... because we're human.

2

SoylentRox t1_j6immie wrote

I would argue your skepticism is of the same form as above.

"Since AI can't control robotics well (in the sota implementations, it controls robots very well in other papers), by the time I graduated college from the time I selected my major (2-4 years) AI still won't be able to do those things"

You actually may be right, for a far more pedantic reason: good robotics hardware is expensive.

2

Mash_man710 t1_j6hqmy0 wrote

I saw a quote: you won't lose your job to AI, you'll lose it to a person who uses AI better than you.

1

misdreavus79 t1_j6imvrc wrote

Probably not. After all, we have yet to reach the point of no return with AI, which is when outside forces actually have any sway in the matter.

1

overmen t1_j6j48kf wrote

I am sure the Flat Earth Society will have a chapter for that.

1

PhilosophusFuturum t1_j6ecd3z wrote

>Are there any real movements against AI technology

Aside from generally backwater movements like paleoconservatism or fascism, there aren't really any organized major movements resisting technological progress or AI progress.

This is great news because we can get a massive head start on developing AGI before the anti-technology people start to get wise. The last thing we need is a Luddite Vercingetorix when things are just starting to get interesting.

−1