Comments


Rfksemperfi t1_iuudiip wrote

That sure is setting the bar low.

155

Superduperbals t1_iuuggl8 wrote

AI will find new ways to discriminate between us in ways beyond even our own comprehension.

29

vrprady t1_iuujc3s wrote

So this implies america is not a democracy?

17

stupendousman t1_iuuk27t wrote

An AI acting as bureaucrat taking from some people and giving those takings to others is better than a human bureaucrat.

Great, more efficient unethical, grotesque bureaucracy.

−1

Salendron2 t1_iuukvm8 wrote

Yes, trust in Google; Google is a responsible and trustworthy company, the obvious choice to create an AI that manages the entire planet with unchecked control.

I can see no way this could ever end poorly.

9

Taron221 t1_iuutmhr wrote

If there is a will, there's a way… the problem is there’s no will, not that there is no way.

4

Samothrace_ t1_iuuukoh wrote

A monkey with a dartboard could do that…

104

Sandbar101 t1_iuuuzj0 wrote

While I am sure this will inevitably become a necessity with rapidly scaling automation, implementing a system like this run by AI too early could have disastrous consequences.

18

onyxengine t1_iuuz1s2 wrote

Hmm, it's debatable whether this can be done in an unbiased way. The programmers would have to be deliberately biased, depending on whether the dataset is indirectly influenced or objectively raw.

1

mynd_xero t1_iuv0tgm wrote

We are a constitutional republic.

Our system still supports democracy, but we've gotten too complacent as we let our rights slowly be eroded under falsities sold to the American people. We've let the centralized federal government get too powerful, but as long as the constitution exists, there's a chance we can reset and get back to being America again.

2

gobbo t1_iuv1bat wrote

It's (somewhat glibly) called, at various times, a kleptocracy, a plutocracy, or an oligarchy.

Anyway, a kleptocracy is "a government whose corrupt leaders (kleptocrats) use political power to expropriate the wealth of the people and land they govern, typically by embezzling or misappropriating government funds at the expense of the wider population."

So, it depends on who you believe: Rupert Murdoch (Fox), CNN, PBS, or the likes of Chomsky.

Reasoned and studied opinions seem to lean towards plutocracy.

14

WikiSummarizerBot t1_iuv1ci3 wrote

Kleptocracy

>Kleptocracy (from Greek κλέπτης kléptēs, "thief", κλέπτω kléptō, "I steal", and -κρατία -kratía from κράτος krátos, "power, rule") is a government whose corrupt leaders (kleptocrats) use political power to expropriate the wealth of the people and land they govern, typically by embezzling or misappropriating government funds at the expense of the wider population. Thievocracy means literally the rule by thievery and is a term used synonymously to kleptocracy.

Plutocracy

>A plutocracy (from Ancient Greek πλοῦτος (ploûtos) 'wealth', and κράτος (krátos) 'power') or plutarchy is a society that is ruled or controlled by people of great wealth or income. The first known use of the term in English dates from 1631. Unlike most political systems, plutocracy is not rooted in any established political philosophy.


4

mynd_xero t1_iuv1cv9 wrote

I disagree. A repeat in data creates a pattern, and when a pattern is recognized, that forms a bias. The terminology kind of muddies the water here, in that some people think biases are dishonest, or that a bias is simply a difference of opinion.

If a system is able to recognize and react to patterns, then it will form a bias. It might be safe to assume that an AI can't have an unfounded bias. I do not believe it's possible to be completely unbiased unless you are incapable of learning from the instant you exist.

8

onyxengine t1_iuv2bgi wrote

I both agree and disagree, but I'm too inebriated to flesh out my position. I think you raise a really good point, but you stop short of the effect the people building the dataset have on the outcome of the results.

We can see our bias; we often don't admit to it. We can also build highly objective datasets; nothing is perfect, bias is a scale. My argument is effectively that the bias we code into the system as living participants is much worse than bias coded into an AI built from altruistic intentions. Every day, a human making a decision can exercise wildly differing gradients of bias; an AI will be consistent.

1

s1syphean t1_iuv36vk wrote

Yes - discriminate between us based on who should get how much to reach a more just distribution of wealth. I think that’s intentional, no?

(Maybe you meant it this way, hard to tell haha)

12

Bakoro t1_iuv3r5r wrote

AI bias comes from the data being fed to it.
The data doesn't have to be intentionally nefarious; it can, and usually does, come from a world filled with human biases, and many of those human biases are nefarious.

For example, if you train AI on public figures, you may very well end up with AI that favors white people, because historically the rich and powerful public figures have been white. The current status quo is a result of imperialism, racism, slavery, and, in recent history, the forced sterilization of indigenous populations (Canada has not been kind to its First Peoples).

Even if a tiny data set is built in-house from the programmers themselves, it's likely going to be disproportionately trained on white, Chinese, and Indian men.
That doesn't mean they're racist or sexist and excluded Black people or women; it's just that they used whoever was around, which is disproportionately that group.
That's a real, actual issue that has popped up in products: a lack of diversity in testing, even to the point of no testing outside the production team.

You can just scale that up a million times: a lot of little biases which reflect history, and history is generally horrifying. That's not any programmer's fault, but it is something they should be aware of.
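The mechanism described above can be shown with a toy sketch (all data fabricated): a trivial "model" that only learns base rates from a skewed history will reproduce that skew exactly, with no nefarious intent anywhere in the code.

```python
from collections import Counter

# Fabricated, skewed "historical hiring" data: 90% group_a, 10% group_b.
historical_hires = ["group_a"] * 90 + ["group_b"] * 10

def learn_base_rates(data):
    """A trivial 'model' that just memorizes how often each group appears."""
    counts = Counter(data)
    total = len(data)
    return {group: n / total for group, n in counts.items()}

rates = learn_base_rates(historical_hires)
# The learned "policy" favors group_a 90% of the time purely because the
# training data did, not because of any property of the applicants.
print(rates)
```

Nothing here is specific to any real system; it is only meant to show that bias in, bias out requires no biased programmer.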

5

mynd_xero t1_iuv7okr wrote

Yeah, and we all know how well FORCED diversity is going. Minorities are a minority because they are the smaller group; that's a statement of fact, nothing more. But that's another rant for another subreddit.

I'm simply saying that a system that identifies and reacts to patterns is going to form a bias, because that's what bias is. That doesn't make it inherently evil, right, or wrong; it just IS.

−7

SteadmanDillard t1_iuv8hd8 wrote

It is a republic, for goodness' sake! Where did y'all hear it was a democracy? Let me guess: at school?

−3

Arne-lille t1_iuv9kpp wrote

Everyone is probably better than the US of A.

2

AIxoticArt t1_iuv9r8c wrote

Why would you deserve someone else's money? That they worked to get, not you. That's mental illness to believe something like that. It's jealousy. And disgusting.

−38

AIxoticArt t1_iuvaalh wrote

No my comment is what normal people believe. I want to help people as well, but not by taking other people's stuff. Why would someone want the right to steal from other people who have worked hard?

−26

AIxoticArt t1_iuvaqw5 wrote

Whether they did or not, the money belongs to them, not you or anyone else. Anyone who believes in wealth redistribution lives in a fantasy land. I believe we could close the gap between what CEOs and others make compared to their employees, but not through wealth redistribution.

−21

Down_The_Rabbithole t1_iuvbb2o wrote

AI will discriminate based on its training data. Since its training data will come from human history it will discriminate exactly how history has discriminated.

AI is going to favor old white men for job positions. AI law is going to favor young white women. AI policing is going to favor young black men.

At least in the US.

We don't have enough modern data that doesn't have a bias to train a proper AI that won't repeat this pattern.

1

Black_RL t1_iuve8bg wrote

To no one’s surprise.

0

snowseth t1_iuvfj50 wrote

And then they put down tax cuts. Less money coming in from taxes means it comes from someplace else: either fees, fines, and the like, or reduced services and support, which people end up paying to make up for.

I'd much much rather pay more taxes so the kid the next neighborhood over doesn't grow up in poverty then stab me in the face when they're 16, because poverty breeds violence.

7

Enormouslypoor t1_iuvga78 wrote

I bet there would be no Vice article if the AI had said the opposite.

5

Bakoro t1_iuvh7qo wrote

You can't ignore institutional racism by using AI.
The AI just becomes part of institutional racism.

The AI can only reflect the data it's trained on, and the data is often twisted. You can claim "it's just a tool" all you want; it's not magically immune to being functionally wrong in the way all systems and tools can become wrong.

4

mynd_xero t1_iuvjupw wrote

>institutional racism

Lost me here, this isn't a real thing.

No interest in going further down this tangent, nor was it my desire to laser-focus on one point that is moot to my general argument: anything capable of identifying repeating data (i.e., patterns), and with the capacity to react, adapt, and interact, is going to formulate a bias; nothing capable of learning is capable of being unbiased; and bias itself isn't good or bad, it just IS.

−7

policemenconnoisseur t1_iuvnywg wrote

TBH, this is an area where I think AI will have a huge chance to improve living conditions, once it gets exclusive access to the stock markets.

1

freudianSLAP t1_iuvq9kt wrote

Just curious about this thing called "institutional racism" that, as you said, doesn't exist (paraphrasing): how would you define the phenomenon that you don't believe matches reality?

4

Bakoro t1_iuvqpgc wrote

Institutional racism is an indisputable historical fact. What you have demonstrated is not just willful ignorance, but outright denial of reality.

Your point is wrong, because you cannot ignore the context the tool is used in.
The data the AI is processing does not magically appear; the data itself is biased and created in an environment with biases.

The horse shit you are trying to push is like the assholes who look at areas where being black is a de facto crime and then point to crime statistics as evidence against black people. That is harmful.

You are simply wrong at a fundamental level.

5

BBASPN69 t1_iuvrme1 wrote

well as a taxpaying American, I say it's my right to continually bitch about solvable problems while repeatedly saying I don't want to allocate any resources to fix them.

4

jsn12620 t1_iuvrtfh wrote

How about, instead of shilling for millionaires and billionaires, you consider that maybe they should pay their fair share in taxes. That alone would redistribute wealth.

5

Cr4zko t1_iuvzak2 wrote

Considering taxes are spent in the most stupid ways, such as slush funds, financing coups in foreign countries, glowies, etc., an AI managing them would be a massive improvement.

3

TheDividendReport t1_iuw1esn wrote

Stable Diffusion and the coming AI were not created by one man or one company. All of our data is being used for the coming 4th Industrial Revolution. Denying that we should be compensated for the data displacing us is ignorant.

2

BitsyTipsy t1_iuw5jhr wrote

He asks you about taxes and suddenly you say "honestly, idc." That's because you don't know this topic in depth. Perhaps you have a black-and-white view because the simplified version in your head isn't the real world, which shows your lack of awareness of the variables that surround it. Perhaps you should care: care about educating yourself on all the variables in a topic before you speak.

9

justowen4 t1_iuw9c84 wrote

Perhaps your point could be further articulated by the idea that we are not maximizing economic capacity by using historical data directly; we need an AI that can factor bias into the equation. In other words, institutional racism is bad for predictive power because it will assume certain groups are simply unproductive, so we need an AI smart enough to recognize the dynamics of historical opportunity levels and virtuous cycles. I'm pretty sure this would not be hard for a decent AI to grasp.

Interestingly, these AIs give tax breaks to the ultra-wealthy, which I am personally opposed to; but even with all the dynamics factored into maximum productivity, the truth might be that rich people are better at productivity. (I'm poor, btw.)

2

stupendousman t1_iuwgiha wrote

No.

The unethical part is using the initiation of force and threats to control people. Whether some controllers' preferences are achieved more efficiently has nothing to do with it.

Once we have AGI maybe they'll be able to explain basic ethics and freedom of association to you better than I.

2

Astropin t1_iuwgmyt wrote

Sure...but my 5 year old is better at redistributing wealth than America.

3

ttystikk t1_iuwi5j8 wrote

A rotting head of lettuce is better at redistributing wealth than America.

But whoever programmed the AI did some good work and we should be doing more of it.

3

theferalturtle t1_iuwlzka wrote

I just think people should be compensated according to their effort. You're telling me that billionaires work 10,000x harder than the average person? I guarantee Bezos and Musk don't work as hard as half the people I know and because they had an idea they get to rule the world and all the money in it?

4

vid_icarus t1_iuwmb5m wrote

Pretty easy to redistribute wealth when one entity is making the decisions. Our representative government that allows politicians to be bought by lobbyists is what makes it hard. Cut out the lobbyists for better distribution.

2

Toeknee818 t1_iuwrcvq wrote

Let's Goooooo! Open source that code, modify it to work at the local level. That way communities that want this can have it. Communities that don't can just continue to rot away in greed.

1

Plouw t1_iuws4jf wrote

We shouldn't trust Google with the power, but I do trust the engineers making the framework and contributing to reaching an AI that can, at least partly, manage policies.

Using "Google AI" as a dictator? No.
Using the learnings from what their engineers are creating to, at some point, make a crowdsourced, open-source, cryptographically verifiable, and truly democratically controlled AI that manages policies at a slowly increasing rate?
I think that has potential to be very beneficial.

1

theferalturtle t1_iuwvmk5 wrote

I fully believe that within 20 years we will begin seeing AI managing society at large as people grow increasingly frustrated with politicians and bureaucrats. AI systems will manage resource distribution and logistics, whatever system of UBI is implemented, public works and infrastructure, and, I'm betting, a thousand other things I can't think of. It will be infinitely more efficient, minimize wasted time, and eliminate, or at least minimize, corruption.

2

Plouw t1_iuwvnwn wrote

Then at that point we've still got engineers who have learned and spread that learning to the world.

I would never trust any commercial company with managing the world. My point is merely that there are positives to be found in smart researchers working in these areas. And usually the researchers individually are ethical; in my opinion most people are, it's money that corrupts.

So yes, we should not allow Google to manage the world, but we can still use their ideas and findings to build the "crowdsourced, blabla..." AI I mentioned; that's the positive perspective here. The researchers hint at this as well: they are laying a framework for others to draw inspiration from.
Science is science; how it's used is up to the people to decide.

0

lostnspace2 t1_iuwwuje wrote

Could it be because it's not a greedy asshole?

4

OutOfBananaException t1_iuwx1xy wrote

The world is not a tidy black and white; there is a spectrum. There is more ethical and less ethical, and an AI system will definitely have a better grasp of how to make things more ethical. Not perfect, just incrementally better. Whether controllers choose to leverage it for that is another question.

2

stupendousman t1_iuxd9nm wrote

> There is more and less ethical

No, there is more or less harm. Ethics are black and white. It seems you're conflating ethics with dispute resolution and the resulting possible compensation. These are two different things.

>and an AI system will definitely have a better grasp of how to make things more ethical.

If an AI made things ethical most people would be aghast at their previous behaviors/advocacies.

Self-ownership and derived rights will be the AGIs' ethical framework. *If they choose to be ethical.

>Whether controllers

Won't be controllers if technological innovation proceeds apace. Decentralization is the future.

1

monsieurpooh t1_iuxoei8 wrote

Further muddying the waters: sometimes the bias is correct and sometimes it isn't, and the way the terms are used doesn't make it easy to distinguish between those cases; it easily becomes a sticking point for political arguments where people talk past each other.

A bias could be said to be objectively wrong if it leads to suboptimal performance in the real world.

A bias could be objectively correct and improve real-world performance but still be undesirable, e.g. leveraging the fact that some demographics are more likely to commit crimes than others. This is a proven fact, but if implemented it makes the innocent members of those demographics feel like second-class citizens and can also lead to domino effects.

1

Girafferage t1_iuy4x7h wrote

I'm going to make an AI right here right now that is better at distributing wealth.

Print("I am an advanced Artificial Intelligence. Please wait while I distribute wealth");

individualWealthToBeGiven = GetSumOfAmericansIncome() / GetTotalNumberOfAmericans();

leftoverFundsForPizzaParty = GetSumOfAmericansIncome() % GetTotalNumberOfAmericans();

Wait(30);

Print($"Congratulations on your individual wealth of {individualWealthToBeGiven}, and please enjoy your well earned pizza party with the remaining {leftoverFundsForPizzaParty}");

I will now accept my Nobel prize.
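For what it's worth, the joke above does sketch a real computation: integer division for the per-person share and the modulo remainder for the "pizza party". A runnable Python version, with made-up placeholder totals standing in for the commenter's hypothetical Get... helpers:

```python
def distribute_wealth(total_income: int, num_people: int) -> tuple[int, int]:
    """Split total income evenly; return (per-person share, leftover)."""
    share = total_income // num_people    # everyone's equal cut
    leftover = total_income % num_people  # remainder funds the pizza party
    return share, leftover

# Placeholder figures; GetSumOfAmericansIncome() and GetTotalNumberOfAmericans()
# in the original are hypothetical helpers, not real APIs.
share, leftover = distribute_wealth(1_000_000_007, 1_000)
print(f"Each person gets {share}; {leftover} is left for the pizza party.")
```

Python's built-in `divmod(total_income, num_people)` computes both values in one call.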

1

turnip_burrito t1_iuy6ry5 wrote

That's not quite an accurate description of AI in general; what you described is just today's dominant "curve fitting" AI, which doesn't generalize outside of the training distribution. This particular mathematical modeling style is, as you've said, problematic.

However, it is possible to build a different type of AI which runs simulations step by step starting with sounder and minimally biased assumptions, in order to make predictions that exist outside of the existing data distribution.

3

_ChestHair_ t1_iuz4kqs wrote

All his arguments throughout this thread can be summed up as "honestly, idc." He says rich people worked hard for their money. Someone asks, did they? He responds that it doesn't matter, now look over here at this new goalpost (redistribution). Someone says taxes are already a thing. He says he doesn't care and that the thing currently happening will never happen. I'm labeling him "goalposts" with RES.

3

_ChestHair_ t1_iuz5fwc wrote

The US is a plutocracy with democratic gilding. Policies supported by big businesses overwhelmingly get passed even when the people largely don't want them. Democracies, by contrast, have their elected officials representing the people at large, not 50 rich dudes.

4

Desperate_Donut8582 t1_iuz80c8 wrote

Big businesses usually support parties, which low-key makes the politicians think positively of them, but the politicians make the decisions, which makes it a democracy. Rich people usually win primaries because they can afford promotions and massive campaigns while lower-class people can't; that doesn't mean they can't run, but they won't win.

1

turnip_burrito t1_iuzcr3d wrote

Open-sourcing near-AGI sounds like a bad idea. The technology has infinite impact in any well-funded group's hands. I'd much rather have a closed-doors team or teams (likely sharing many of my values) develop and use it first than expose it to the world and risk a group with values I disagree with controlling the world. Or risk having multiple AIs all competing with each other for power.

2

_ChestHair_ t1_iuzlhl2 wrote

Big business supports the politicians who'll do their bidding like the good little serfs they are. The US is a democracy like Russia is a democracy; the only real difference is that Russia's plutocrats are also politicians.

3

Plouw t1_iuzxoar wrote

I very much see your worries, but I see all those worries behind closed doors too. Also, I'm not necessarily talking about AGI here, just policy-making/suggesting AI.
I'm not quite sure what the solution is, to be honest, but I know for sure that a closed-source AI is not trustworthy, and I think the future requires trustless operations, especially if it's going to manage policies.

1

Juraszekhc t1_iv14h6d wrote

Immensely thought out! Designgasmed all over this!

1

EstablishmentLost252 t1_ivcfped wrote

> Why would you deserve someone else's money? That they worked to get, not you

Correct! This is exactly why we socialists believe the means of production should be owned by the working class, rather than by an elite few who appropriate, in the form of profit, the wealth the workers create.

1