BraveNewCurrency t1_j1szem8 wrote
>How can AI be racist if it's only looking at raw data. Wouldn't it be inherently not racist? I don't know just asking.
https://en.wikipedia.org/wiki/Tay_(bot)
https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
https://hub.jhu.edu/2022/06/21/flawed-artificial-intelligence-robot-racist-sexist/
The problem is "Garbage In, Garbage Out".
Most companies have a fair amount of systemic racism already in their hiring. (Does the company really reflect the society around it, or is it mostly white dudes?)
So if you train the AI on the existing data, the AI will be biased.
But even at a biased company, humans can "do better" in the future, because humans have the ability to introspect. AI does not, so it won't get better over time. Another confounding factor is that most AIs are terrible at explaining WHY they made a decision. It takes a lot of work to pull back the curtain and analyze the decisions the AI is making. (And nobody wants to challenge the expensive AI system that magically saves everybody work!)
sabinegirl t1_j1tjiq2 wrote
this, the ai is gonna have to exclude names and even colleges from the data sets, or it will be a mess.
BraveNewCurrency t1_j1vaoml wrote
This is the wrong solution.
There are hundreds of other things the AI can latch on to instead: males vs females write differently, they often have different hobbies, they sometimes take different classes, etc.
The problem is lazy people who want to have the computer magically sort "good people" from "bad people" when 1) the data is biased, 2) they are feeding the computer literally thousands of irrelevant data points, and 3) nobody has ever proved that the data can actually differentiate good candidates from bad ones.
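To make the proxy problem concrete, here is a minimal sketch on synthetic data (every feature, number, and name below is invented for illustration, not taken from any real hiring system): even with the protected attribute deleted, a model can recover it from correlated features and reproduce the historical skew.

```python
# Minimal sketch (synthetic data): dropping the explicit "gender" column
# does not help if correlated proxy features remain in the data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hidden protected attribute (never shown to the model).
gender = rng.integers(0, 2, n)

# Proxy features that merely *correlate* with gender: hobby keywords,
# word-choice statistics, class choices. All values are made up.
proxies = np.column_stack([
    rng.normal(loc=gender * 1.5, scale=1.0),   # "hobby" signal
    rng.normal(loc=gender * 1.0, scale=1.5),   # "writing style" signal
])

# Biased historical label: past hiring favored gender == 1.
hired = (rng.random(n) < np.where(gender == 1, 0.6, 0.3)).astype(int)

X_train, X_test, g_train, g_test = train_test_split(
    proxies, gender, random_state=0)

# The proxies alone recover the "hidden" attribute well above chance...
leak = LogisticRegression().fit(X_train, g_train)
print("gender recoverable from proxies:", leak.score(X_test, g_test))

# ...so a model trained on (proxies, biased labels) reproduces the
# historical skew even though gender was "excluded" from the data.
model = LogisticRegression().fit(proxies, hired)
rate_0 = model.predict(proxies[gender == 0]).mean()
rate_1 = model.predict(proxies[gender == 1]).mean()
print(f"predicted hire rate, group 0: {rate_0:.2f}, group 1: {rate_1:.2f}")
```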
https://www.brainyquote.com/quotes/charles_babbage_141832
What we need are more experiments to find ground truth. For example, Google did a study and found that people who went to college only had a slight advantage, and only for the first 6 months on the job. After that, they could find no difference.
If that researcher had been studying thousands of irrelevant data points, that insight probably would have been lost in the noise.
cheapsexandfastfood t1_j22gqqu wrote
Seems like Google should be able to figure it out if anybody could.
They have enough resumes and enough employee review data to start with.
BraveNewCurrency t1_j273tjw wrote
Did you read their solution to the Gorilla thing? (Linked above?)
dissident_right t1_j1u220s wrote
AI is not biased. It's precisely its lack of bias that causes AI to see patterns in society that ignorant humans would rather turn their eyes away from.
>But even at a biased company, humans can "do better" in the future, because humans have the ability to introspect.
Here 'introspect'/"do better" means 'be bullied/pressured into holding factually incorrect positions'.
Most likely the Amazon AI was as proficient as, or more proficient than, any human HR department at selecting qualified candidates. It was shut down not due to any inaccuracy of the algorithm at selecting qualified candidates, but rather for revealing a reality about qualified candidates that did not align with people's a priori delusions about human equality.
AmbulatingGiraffe t1_j1ugwcm wrote
This is objectively incorrect. One of the largest problems related to bias in AI is that accuracy is not distributed evenly across different groups. For instance, the COMPAS exposé revealed that an algorithm being used to predict who would commit crimes had significantly higher false positive rates (saying someone would commit a crime who then didn't) for black people. Similarly, the accuracy was lower for predicting serious violent crimes than for misdemeanors or other petty offenses. It's not enough to say that an algorithm is accurate and therefore unbiased, just showing truths we don't want to see. You have to look very carefully at where exactly the model is wrong, and whether it's systematically wrong for certain kinds of people or situations. There's a reason this is one of the most active areas of research in the machine learning community: it's an important and hard problem with no easy solution.
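A toy illustration of the point (made-up numbers, not the actual COMPAS data): overall accuracy can look respectable while the false positive rate, the error that matters here, is far worse for one group.

```python
# Toy sketch: decent overall accuracy can hide sharply unequal error rates.
import numpy as np

rng = np.random.default_rng(1)
group = np.repeat(["A", "B"], 1000)
actual = rng.random(2000) < 0.3          # who actually reoffends (30% base rate)

# Inject group-dependent errors: group B gets far more false positives.
fp_rate = np.where(group == "A", 0.10, 0.40)
flip_up = (~actual) & (rng.random(2000) < fp_rate)
pred = actual | flip_up

print("overall accuracy:", (pred == actual).mean())
for g in ("A", "B"):
    mask = (group == g) & (~actual)      # people who did NOT reoffend
    print(f"group {g} false positive rate: {pred[mask].mean():.2f}")
```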
AboardTheBus t1_j1w7k4l wrote
How do we differentiate between bias and facts that are true but uncomfortable for people to express?
Alexstarfire t1_j1u7510 wrote
Interesting argument. Anything to back it up?
dissident_right t1_j1ub67r wrote
>Anything to back it up?
Reality? Algorithms are used extensively by thousands of companies in thousands of fields (marketing, finance, social media etc.). They are used because they work.
A good example of this would be the University of Chicago's 'crime prediction algorithm' that attempts to predict who will commit crimes within major American cities. It has been under attack for supposed bias (racial, class, sex, etc. etc.) since the outset of the project. Despite this, it is correct in 9 out of 10 cases.
Alexstarfire t1_j1uspe5 wrote
A source for how well crime predicting AIs work isn't the same as one for hiring employees. They aren't interchangeable.
dissident_right t1_j1w48yb wrote
>They aren't interchangeable.
No, but unfortunately we cannot say how well the algorithm would have worked in this instance, since it was shut down before anyone could see whether its selections made good employees.
The point remains: if algorithms are relied on to be accurate in 99.9% of cases, and if an algorithm can be accurate at something as complex as 'who will become a criminal', why would this area be the only one where AI is somehow unreliable/biased?
As I said, it's the humans who possess the bias. They saw 'problematic' results and decided, a priori, that the machine was wrong. But was it?
Dredmart t1_j1uq75f wrote
They linked you proof, and you're still full of shit. You sound exactly like a certain group that rose in the early 1900s.
TheJocktopus t1_j1v6wsk wrote
Incorrect. AI can definitely be biased. Where do you think the data that it's trained on comes from? Another AI? No, it comes from people. An AI is only as accurate as its training data.
A famous example: AIs often come to the conclusion that black Americans are healthier than other Americans and thus do not need as much assistance with their health. In reality, the opposite is true, but the AI doesn't realize that because it's just looking at the data given to it. That data shows that black Americans are less likely to go to the hospital, so the AI assumes this is because there is nothing wrong with them. In reality, most humans would recognize that it is because black Americans are more likely to be poor, and can't afford to go to the hospital as frequently.
A few more examples that could happen: an AI image-generation program might be more likely to draw teachers as female, since that is what most of the training data depicted. An AI facial recognition system might be less accurate at identifying Hispanic people by their facial features because fewer images of Hispanic people were included in the training data. An AI that suggests recommended prison sentences might give harsher sentences to black people because it was trained on previous decisions made by human judges, who tend to give harsher sentences to black people.
TL;DR: AI technology doesn't exist in a vacuum. People have biases, so AIs also have biases. AIs can have less bias if you're smart about what training data you use and what information you hide from the AI.
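The healthcare example above comes down to what researchers call "label bias": the model is trained to predict a proxy (hospital visits or costs) rather than the thing we actually care about (health need). A toy sketch, with all numbers invented:

```python
# Toy sketch of label bias: the model is asked to predict health *need*,
# but is trained on hospital *visits* -- a proxy that also encodes
# unequal access to care.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 4_000
poor = rng.integers(0, 2, n)                     # proxy for access to care
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true need, equal across groups

# Observed label: visits track need, but low access halves visits.
visits = need * np.where(poor == 1, 0.5, 1.0)

# A model trained on visits "learns" that the low-access group is healthier.
model = LinearRegression().fit(poor.reshape(-1, 1), visits)
print("predicted 'need' (really visits), full access:", model.predict([[0]])[0])
print("predicted 'need' (really visits), poor access:", model.predict([[1]])[0])
print("actual mean need (both groups ~equal):",
      need[poor == 0].mean(), need[poor == 1].mean())
```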
BraveNewCurrency t1_j1vd8a1 wrote
>Most likely the Amazon AI was as, or more proficient at selecting qualified candidates than any human HR department.
Why is that "most likely"? Citation needed.
(This reminds me of the experiment where they hired grad students to 'predict' if a light would turn on or not. The light turned on 80% of the time, but the grad students were only right about 50% of the time because they tried to predict the light. The researchers also tried monkeys, who just leaned on the button, and were right 80% of the time.
An AI is like those monkeys -- because 80% of good candidates are male, it thinks excluding female candidates will help the accuracy. But that's not actually true, and you are literally breaking the law.)
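For the curious, here's a quick simulation of that effect, assuming independent 80/20 trials. The exact figures vary by study, but pattern-guessing reliably loses to just leaning on the button:

```python
# Probability-matching trap: trying to mimic the pattern underperforms
# simply always predicting the majority outcome.
import numpy as np

rng = np.random.default_rng(3)
light = rng.random(100_000) < 0.8            # light turns on 80% of the time

# "Grad student" strategy: guess "on" 80% of the time, trying to match.
matching = rng.random(100_000) < 0.8
print("probability matching:", (matching == light).mean())   # ~0.68

# "Monkey" strategy: always predict the majority outcome.
always_on = np.ones(100_000, dtype=bool)
print("always predict 'on' :", (always_on == light).mean())  # ~0.80
```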
What if the truth is that a Resume alone is not enough to accurately predict if someone is going to work out in a position? What if a full job interview is actually required to find out if the person will fit in or be able to do good work? What if EVERY ranking algorithm is going to have bias, because there isn't enough data to accurately sort the resumes?
Having been a hiring manager, I have found that a large fraction of resumes contain made-up bullshit. AI is just GIGO with extra steps.
This reminds me of back in the 80's: Whenever a corporation made a mistake, they could never admit it -- instead, everyone would "blame it on the computer". That's like blaming the airplane when a bolt falls off (instead of blaming poor maintenance procedures.)
dissident_right t1_j1w4upw wrote
>Why is that "most likely"? Citation needed.
I can't provide a citation, since the program was shut down before it had a chance to prove its accuracy.
As I said, simple observation will demonstrate that just because a progressive calls an AI's output 'problematic' (e.g. the Chicago crime prediction algorithm), 'problematic' is clearly not the same as inaccurate.
Again, why would you assume that an AI algorithm couldn't predict employee suitability, seeing how well algorithms predict... basically everything else about our world?
You are simply trying to avoid a conclusion that you don't want to consider: what if men are naturally better suited to be software engineers?
BraveNewCurrency t1_j1wvngh wrote
>What if men are naturally better suited to be software engineers?
First, ignorant people proposed that exact same line of reasoning, but with firefighters instead of SW Engineers. Go read some history on how that worked out.
Second, did you read that link you sent? It claims nothing of the sort, only that "there are physical/mental differences between men and women". No shit, Sherlock. But just because the "average male is slightly taller than the average female" doesn't mean "all men are tall" nor "women can't be over 7ft tall". By the same token, "men are slightly better at task X on average" doesn't mean there aren't many women who can beat most men at that task.
Third, if we implement what you are proposing, then you are saying we should not evaluate people on "how good they are at the job", but merely on some physical attribute. Can you explain how that leads to "the best person for the job"?
>a simple observation however will demonstrate to you that just because a progressive calls an AI's observation 'problematic'
Haha, you keep implying that I'm ignorant (should I "do my own research?") because I point out the bias (you never addressed the constant racism by the leading AI companies) but you don't cite any data and recite 100-year-old arguments.
Wait. Are you Jordan Peterson?
dissident_right t1_j1wxp3a wrote
>First, ignorant people proposed that exact same line of reasoning, but with firefighters instead of SW Engineers. Go read some history on how that worked out.
Well... I live in a world in which 99 percent of firefighters are male, so I am guessing the answer is "All the intelligent people conceded that bigger male muscles/stamina made men better at being firefighters, and no one made a big deal out of a sex disparity in firefighting"?
I'm gonna assume here that you live in some sort of self-generated alternate reality where women are just as capable of being firefighters as men, despite being physically weaker, smaller, and lacking in stamina (relative to men)?
>doesn't mean there aren't many women who can beat most men at that task
No, but if I am designing an AI algorithm to select who will be best at 'task X', I wouldn't call the algorithm biased/poorly coded if it overwhelmingly selected from the group shown to be better suited for task X.
Which is, more or less what happened with the Amazon program. Kinda ironic seeing as they... rely on algorithms heavily in their marketing of products, and I am 100% sure that 'biological sex' is one of the factors those algorithms account for when deciding what products to try and nudge you towards.
>constant racism by the leading AI companies
I haven't 'addressed' it because I think the statement is markedly untrue. Many people call the U of Chicago crime prediction algorithm "racist" for disproportionately 'tagging' Black men as being at risk of being criminals/victims of crimes.
However, if that algorithm is consistently accurate, how can an intelligent person accuse it of being biased?
As I said, there is plenty of bias involved in AI, but the bias is very rarely on the part of the machines. The real bias comes from the humans who either A) ignore data that doesn't fit their a priori assumptions, or B) read the data with such a biased eye that they draw conclusions that don't actually align with what the data is showing. See: your reaction to the Stanford article.
>Are you Jordan Peterson?
No.
BraveNewCurrency t1_j276p82 wrote
>Well... I live in a world in which 99% percent of fire fighters are male
So... not this world, because it's more like 20% here. (And it would be bigger if females weren't harassed so much.)
>no-one made a big deal out of a sex disparency in fire fighting
Sure, ignore history. You are doomed to repeat it.
> If I am designing an AI algorithm to select who will be best at 'task X', I wouldn't call the algorithm biased/poorly coded if it overwhelmingly selected from the group shown to be better suited for task X.
Good thing nobody asks you, because that is the wrong algorithm. Maybe it's a plausible shortcut if you are looking for "the best in the world". But given an arbitrary subset of people, it's not always going to be a male winner. You suck at writing algorithms.
>I haven't 'addressed' it because I think the statement is markedly untrue.
Let's summarize so far, shall we?
- You asked how an AI could be racist. I gave you links. You ignored them.
- You asserted the AI is not biased (without any evidence), and later doubled-down by saying those articles are "untrue" (again.. without any evidence)
- You claimed that 99% of firefighters are male (without evidence)
- You assert that "picking all males for a SW position is fine" (without any evidence, and despite me pointing out that it is literally illegal), then doubled down implying that you personally would preferentially hire only males even though there is no evidence that males have an advantage in SW.
You are blocked.