Alexstarfire t1_j1uspe5 wrote
Reply to comment by dissident_right in NYC's AI bias law is delayed until April 2023, but when it comes into effect, NYC will be the first jurisdiction mandating an AI bias order in the world, revolutionizing the use of AI tools in recruiting by Background-Net-4715
A source on how well crime-predicting AIs work isn't the same as one on hiring employees. They aren't interchangeable.
dissident_right t1_j1w48yb wrote
>They aren't interchangeable.
No, but unfortunately we cannot say how well the algorithm 'would' have worked in this instance, since it was shut down before anyone could see whether its selections made good employees.
The point remains: if algorithms are relied on to be accurate in 99.9% of cases, and if an algorithm can be accurate at something as complex as 'who will be a criminal', why would this area be the only one where AI is somehow unreliable/biased?
As I said, it's the humans who possess the bias. They saw 'problematic' results and decided, a priori, that the machine was wrong. But was it?