Emergency_Paperclip t1_irc8bih wrote

>When AI is developed or used in ways that don’t adequately take into account existing inequities or is used to make decisions for which it is inappropriate, we see real-life harms such as biased, harmful predictions leading to the wrongful arrest of Black people, jobs unfairly denied to women, and disparate targeting of children of color for removal from their families.

Basically, machine learning tries to fit the trends in its training data, and that data was collected from the real world. The world is racist, so the models tend to come out racist. Essentially, this is saying that a model's bigoted outputs shouldn't be taken as validation of or justification for bigotry. A minimal sketch of the mechanism is below.
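To make that concrete, here's a minimal Python sketch of how bias baked into historical labels propagates into a trained model. Everything in it (the "hiring" setup, the group/skill features, the coefficients) is made up purely for illustration; it just shows a model faithfully reproducing whatever disparity its training data encodes.

```python
# Minimal sketch: a model trained on biased historical decisions
# reproduces the bias. All names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# group: a protected attribute (0 or 1); skill: what we'd actually
# want hiring decisions to depend on.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Historical decisions were biased: equally skilled members of
# group 1 were hired less often. The labels encode that bias.
p_hired = 1.0 / (1.0 + np.exp(-(skill - 1.0 * group)))
hired = rng.binomial(1, p_hired)

# Fit on the raw historical data, protected attribute included.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# At identical skill, the model predicts different hire
# probabilities by group -- it has learned the historical bias.
same_skill = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(same_skill)[:, 1])  # roughly [0.50, 0.27]
```

Note that simply dropping the protected attribute from the features wouldn't necessarily fix this: if any remaining feature correlates with group membership, the model can learn the same disparity through it.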
