AmbulatingGiraffe t1_j1ugwcm wrote
Reply to comment by dissident_right in NYC's AI bias law is delayed until April 2023, but when it comes into effect, NYC will be the first jurisdiction mandating an AI bias order in the world, revolutionizing the use of AI tools in recruiting by Background-Net-4715
This is objectively incorrect. One of the largest problems in AI bias is that accuracy is not distributed evenly across different groups. For instance, the COMPAS exposé revealed that an algorithm used to predict who would commit crimes had a significantly higher false positive rate (flagging someone as likely to commit a crime who then didn’t) for Black people. Similarly, accuracy was lower for predicting serious violent crimes than for misdemeanors and other petty offenses. It’s not enough to say that an algorithm is accurate and therefore isn’t biased, merely showing truths we don’t want to hear. You have to look very carefully at where exactly the model is wrong, and whether it is systematically wrong for certain kinds of people or situations. There’s a reason this is one of the most active areas of research in the machine learning community: it’s an important, hard problem with no easy solution.
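To make the point concrete, here is a minimal sketch of the per-group false positive rate check described above. All the data and group labels are hypothetical, invented purely for illustration; this is the kind of comparison the COMPAS analysis did, not its actual methodology or numbers.

```python
def false_positive_rate(predictions, outcomes):
    """FPR = fraction flagged positive among people who did NOT
    actually have the outcome (e.g. were predicted to reoffend
    but never did)."""
    # Keep only the true negatives' predictions (outcome == 0)
    preds_on_negatives = [p for p, y in zip(predictions, outcomes) if y == 0]
    if not preds_on_negatives:
        return 0.0
    return sum(preds_on_negatives) / len(preds_on_negatives)

# Hypothetical toy data: prediction 1 = "flagged high risk",
# outcome 1 = "actually reoffended".
group_a_pred = [1, 1, 0, 1, 0, 0]
group_a_true = [1, 0, 0, 0, 0, 1]
group_b_pred = [1, 0, 0, 0, 1, 0]
group_b_true = [1, 0, 0, 0, 1, 0]

fpr_a = false_positive_rate(group_a_pred, group_a_true)  # 2/4 = 0.5
fpr_b = false_positive_rate(group_b_pred, group_b_true)  # 0/4 = 0.0
```

Both groups here could have similar overall accuracy, yet group A's members who never reoffend are flagged half the time while group B's never are. That gap is exactly the kind of disparity an aggregate accuracy number hides.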
AboardTheBus t1_j1w7k4l wrote
How do we differentiate between bias and facts that are true but uncomfortable for people to express?