TheJocktopus t1_j1v6wsk wrote
Reply to comment by dissident_right in NYC's AI bias law is delayed until April 2023, but when it comes into effect, NYC will be the first jurisdiction mandating an AI bias order in the world, revolutionizing the use of AI tools in recruiting by Background-Net-4715
Incorrect. AI can definitely be biased. Where do you think the data it's trained on comes from? Another AI? No, it comes from people. An AI is only as unbiased as its training data.
A famous example: AIs used in American healthcare have concluded that Black Americans are healthier than other Americans and therefore need less help with their health. In reality the opposite is true, but the AI has no way of knowing that, because it's just looking at the data given to it. That data shows that Black Americans go to the hospital less often, so the AI assumes there's nothing wrong with them. Most humans would recognize the real reason: Black Americans are more likely to be poor, and can't afford to go to the hospital as frequently.
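To make the mechanism concrete, here's a minimal sketch with fully made-up data (the real systems and their features were different): a model trained to predict a biased proxy label (past spending) ends up scoring equally sick patients from one group as lower-risk.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B (hypothetical)
need = rng.normal(5.0, 1.0, n)             # true health need: identical for both groups
symptoms = need + rng.normal(0, 0.5, n)    # what a doctor would actually observe

# Proxy label: past spending. Group B spends ~30% less at equal need
# (access barriers), so the label itself is biased.
spending = need * np.where(group == 1, 0.7, 1.0)

# Real systems tend to pick `group` up indirectly through proxies like
# zip code; including it directly here just keeps the sketch short.
X = np.column_stack([symptoms, group])
model = LinearRegression().fit(X, spending)

# Two patients with identical symptoms, different groups:
risk_a, risk_b = model.predict([[5.0, 0], [5.0, 1]])
print(f"predicted risk, group A: {risk_a:.2f}  group B: {risk_b:.2f}")
# Group B scores lower, looks "healthier", and gets less care.
```

The model did exactly what it was asked to do; the problem is that the label it was trained on already encoded the inequality.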
A few more examples that could happen:

* An AI image generator might be more likely to draw teachers as female, since that's what most of its training data depicted.
* An AI facial recognition system might be less accurate at identifying Hispanic people because fewer images of Hispanic people were included in the training data.
* An AI that suggests recommended prison sentences might give harsher sentences to Black people because it was trained on previous decisions by human judges, who tend to give harsher sentences to Black people.
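Biases like these usually get caught by auditing error rates per group rather than overall. A quick illustration with simulated predictions (no real model or dataset involved):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.choice(["majority", "underrepresented"], n, p=[0.9, 0.1])
y_true = rng.integers(0, 2, n)

# Simulate a model that's 95% accurate on the majority group
# but only 75% accurate on the underrepresented group.
acc = np.where(group == "majority", 0.95, 0.75)
correct = rng.random(n) < acc
y_pred = np.where(correct, y_true, 1 - y_true)

print(f"overall accuracy: {(y_pred == y_true).mean():.3f}")
for g in ("majority", "underrepresented"):
    mask = group == g
    print(f"{g}: {(y_pred[mask] == y_true[mask]).mean():.3f}")
```

Overall accuracy looks fine (~93%) while one group's error rate is five times worse, which is exactly why per-group auditing matters.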
TL;DR: AI technology doesn't exist in a vacuum. People have biases, so AIs also have biases. AIs can have less bias if you're smart about what training data you use and what information you hide from the AI (though hiding information is trickier than it sounds; see the sketch below).
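One caveat on the "hide information" part: just deleting the sensitive column often isn't enough, because other features can act as proxies for it. A toy sketch (all variables hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, n)
# A feature 90% correlated with group, e.g. something like zip code.
proxy = group ^ (rng.random(n) < 0.1).astype(int)

# Biased historical labels: group 1 got the bad outcome twice as often.
y = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)

# `group` is hidden from the model -- but the proxy leaks it anyway.
model = LogisticRegression().fit(proxy.reshape(-1, 1), y)
probs = model.predict_proba([[0], [1]])[:, 1]
print(f"P(bad outcome): proxy=0 -> {probs[0]:.2f}, proxy=1 -> {probs[1]:.2f}")
# The gap in predictions survives even though `group` was never a feature.
```

So "being smart about it" means auditing what the model actually does per group, not just trusting that a deleted column can't come back through the side door.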