onyxengine t1_iuuz1s2 wrote

Mmmm, this is debatable; it can be done in an unbiased way. The programmers would have to be deliberately biased, depending on whether the dataset is indirectly influenced or objectively raw.

1

mynd_xero t1_iuv1cv9 wrote

I disagree. A repeat in data causes a pattern, and when a pattern is recognized, that forms a bias. The terminology kinda muddies the water, in that some people think biases are dishonest, or that a bias is simply a difference in opinion.

If a system is able to recognize and react to patterns, then it will form a bias. It might be safe to assume that an AI can't have an unfounded bias. I do not believe it's possible to be completely unbiased unless you are incapable of learning from the instant you exist.
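A toy sketch of what I mean, with made-up data; the only mechanism here is counting repeats, and a "bias" falls out the instant the stream is skewed:

```python
from collections import Counter

class FrequencyLearner:
    """Toy learner whose only mechanism is noticing repeats in the data."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, label):
        # Every repeat reinforces the pattern.
        self.counts[label] += 1

    def predict(self):
        # The learned "bias" is just the most-repeated pattern.
        return self.counts.most_common(1)[0][0]

learner = FrequencyLearner()
for label in ["A"] * 70 + ["B"] * 30:  # a skewed stream of observations
    learner.observe(label)

print(learner.predict())  # "A" -- the bias exists the moment it has learned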

8

Bakoro t1_iuv3r5r wrote

AI bias comes from the data being fed to it.
The data being fed to it doesn't have to be intentionally nefarious; the data can, and usually does, come from a world filled with human biases, and many of those human biases are nefarious.

For example, if you train AI on public figures, you very well may end up with AI that favors white people, because historically those are the people who have been the rich and powerful public figures. The current status quo is a product of imperialism, racism, slavery, and, in recent history, forced sterilization of indigenous populations (Canada, not so nice to their first people).

Even if a tiny data set is made in-house, based on the programmers themselves, it's likely going to be disproportionately trained on White, Chinese, and Indian men.
That doesn't mean they're racist or sexist and excluded Black people or women; it's just that they used whoever was around, which is disproportionately that group.
That's a real, actual issue that has popped up in products: a lack of diversity in testing, even to the point of no testing outside the production team.

You can just scale that up a million times. A lot of little biases which reflect history. History which is generally horrifying. That's not any programmer's fault, but it is something they should be aware of.
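Here's a minimal, synthetic sketch of how that plays out; the groups, numbers, and "inverted pattern" are all invented just to show that a model fit on a skewed dataset serves the majority group and fails the minority one (assumes scikit-learn is available):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Synthetic group: one feature, label = (feature > 0), optionally inverted."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y if flip else y)

# Training set is 95% group A and 5% group B, whose pattern is inverted --
# "whoever was around", scaled down to a toy.
xa, ya = make_group(950, flip=False)
xb, yb = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Held-out accuracy per group: the single model serves the majority group.
for name, flip in [("A", False), ("B", True)]:
    x, y = make_group(1000, flip)
    print(name, model.score(x, y))  # A ~ 1.0, B ~ 0.0
```

Nobody in that sketch is a villain; the skew in the training set does all the work.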

5

mynd_xero t1_iuv7okr wrote

Yeah, and we all know how well FORCED diversity is going. Minorities are a minority because they are the smaller group; nothing one way or the other about people stating that fact. But that's another rant for another subreddit.

I'm simply saying that a system that identifies and reacts to patterns is going to form a bias, because that's what bias is. That doesn't make it inherently evil, right, or wrong; it just IS.

−7

Bakoro t1_iuvh7qo wrote

You can't ignore institutional racism by using AI.
The AI just becomes part of institutional racism.

The AI can only reflect back the data it's trained on, and the data is often twisted. You can claim "it's just a tool" all you want; it's not magically immune to being functionally wrong in the way all systems and tools can become wrong.

4

mynd_xero t1_iuvjupw wrote

>institutional racism

Lost me here, this isn't a real thing.

No interest in going further down this tangent, nor was it my desire to laser-focus on one thing that is moot to my general argument: that anything capable of identifying repeating data (i.e., patterns) and reacting/adapting/interacting/etc. is going to formulate a bias; that nothing capable of learning is capable of being unbiased; and finally, that bias itself isn't good or bad, it just IS.

−7

Bakoro t1_iuvqpgc wrote

Institutional racism is an indisputable historical fact. What you have demonstrated is not just willful ignorance, but outright denial of reality.

Your point is wrong, because you cannot ignore the context the tool is used in.
The data the AI is processing does not magically appear; the data itself is biased and created in an environment with biases.

The horse shit you are trying to push is like the assholes who look at areas where being Black is a de facto crime, and then point to crime statistics as evidence against Black people. That is harmful.

You are simply wrong at a fundamental level.

5

justowen4 t1_iuw9c84 wrote

Perhaps your point could be further articulated by the idea that we are not maximizing economic capacity by using historical data directly; we need an AI that can factor bias into the equation. In other words, institutional racism is bad for predictive power, because it will assume certain groups are simply unproductive, so we need an AI smart enough to recognize the dynamics of historical opportunity levels and virtuous cycles. I'm pretty sure this would not be hard for a decent AI to grasp. Interestingly, these AIs give tax breaks to the ultra-wealthy, which I am personally opposed to, but even with all the dynamics factored into maximum productivity, the truth might be that rich people are better at productivity... (I'm poor, btw)
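One simplistic version of "factoring bias into the equation" is reweighting training examples so historically under-represented groups aren't drowned out; real debiasing is far more involved, and everything below is illustrative:

```python
import numpy as np
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each example by the inverse of its group's frequency, so every
    group contributes equal total weight to training (usable e.g. as the
    sample_weight argument in scikit-learn's fit methods)."""
    counts = Counter(groups)
    k, n = len(counts), len(groups)
    return np.array([n / (k * counts[g]) for g in groups])

groups = ["a"] * 900 + ["b"] * 100
w = inverse_frequency_weights(groups)
print(w[0], w[-1])  # ~0.56 for "a", 5.0 for "b": each group totals 500
```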

2

freudianSLAP t1_iuvq9kt wrote

Just curious about this thing called "institutional racism" that, as you said (paraphrasing), doesn't exist. How would you define this phenomenon that you don't believe matches reality?

4

TistedLogic t1_iuvr439 wrote

What makes you think institutional racism isn't a real thing?

2

onyxengine t1_iuv2bgi wrote

I both agree and disagree, but I'm too inebriated to flesh out my position. I think you raise a really good point, but you stop short of the effect the people building the dataset have on the outcome of the results.

We can see our bias; we often don't admit to it. We can also build highly objective datasets; nothing is perfect, bias is a scale. My argument is effectively that the bias we code into systems as living participants is much worse than bias coded into an AI that was built with altruistic intentions. Every day, a human making a decision can exercise wildly differing gradients of bias; an AI will be consistent.

1

monsieurpooh t1_iuxoei8 wrote

Further muddying the waters: sometimes the bias is correct and sometimes it isn't, and the way the terms are used doesn't make it easy to distinguish between those cases. It easily becomes a sticking point for political arguments where people talk past each other.

A bias could be said to be objectively wrong if it leads to suboptimal performance in the real world.

A bias could be objectively correct and improve real-world performance but still be undesirable, e.g., leveraging the fact that some demographics are more likely to commit crimes than others. This is a proven fact, but if implemented it makes the innocent people among those demographics feel like second-class citizens and can also lead to domino effects.
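A back-of-the-envelope sketch of that second case, with invented numbers: the base-rate policy genuinely "performs better", yet nearly everyone it screens in the targeted group is innocent:

```python
# Invented numbers, purely to make the trade-off concrete.
rate = {"group_x": 0.02, "group_y": 0.01}  # hypothetical offense base rates
budget = 1_000                             # people we can screen

# Group-blind policy: split the screening budget evenly, at random.
blind_hits = budget / 2 * rate["group_x"] + budget / 2 * rate["group_y"]

# Base-rate policy: spend the whole budget on the higher-rate group.
biased_hits = budget * rate["group_x"]

print(f"group-blind: {blind_hits:.0f} hits, base-rate: {biased_hits:.0f} hits")
# 15 vs 20: the biased policy "performs better", yet 98% of the people it
# screens in group_x are innocent, and group_x absorbs all the scrutiny.
```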

1