Alexstarfire t1_j1u7510 wrote
Reply to comment by dissident_right in NYC's AI bias law is delayed until April 2023, but when it comes into effect, NYC will be the first jurisdiction mandating an AI bias order in the world, revolutionizing the use of AI tools in recruiting by Background-Net-4715
Interesting argument. Anything to back it up?
dissident_right t1_j1ub67r wrote
>Anything to back it up?
Reality? Algorithms are used extensively by thousands of companies in thousands of fields (marketing, finance, social media etc.). They are used because they work.
A good example of this would be the University of Chicago's 'crime prediction algorithm', which attempts to predict who will commit crimes within major American cities. It has been under attack for supposed bias (racial, class, sex, etc.) since the outset of the project. Despite this, it is correct in 9 out of 10 cases.
Alexstarfire t1_j1uspe5 wrote
A source for how well crime predicting AIs work isn't the same as one for hiring employees. They aren't interchangeable.
dissident_right t1_j1w48yb wrote
>They aren't interchangeable.
No, but unfortunately we cannot say how well the algorithm would have worked in this instance, since it was shut down before there was any chance to see whether its selections made good employees.
The point remains: if algorithms are relied on to be accurate in 99.9% of cases, and if an algorithm can be accurate even on something as complex as 'who will commit a crime', why would hiring be the only area where AI is somehow unreliable/biased?
As I said, it's the humans who possess the bias. They saw 'problematic' results and decided, a priori, that the machine was wrong. But was it?