MonkeeSage t1_j5wiku6 wrote
The point about it possibly being illegal to ban legal opponents from entry makes sense. This last bit is silly.
> Lastly, research suggests that the Company’s use of facial recognition software may be plagued with biases and false positives against people of color and women.
1.) Nobody is being banned based on facial recognition alone. Human security guards contacted the woman and confirmed her identity. 2.) People in general are biased, including the police in your state, so by that logic they can't have security guards or police either?
Nemesis_Ghost t1_j5wusqp wrote
If 90% of the false positives are people of color or women, that's still a problem. Imagine you are going to an event at MSG and get stopped by security because of a false positive. Security usually isn't very nice to someone they suspect of being a problem. Being stopped by security for any reason can ruin an evening, and that's before you factor in intoxication or other complications.
Kitchen-Award-3845 t1_j5x8qj7 wrote
There aren't any false positives AFAIK, just false negatives, i.e. darker-skinned folks don't get a match at all.
MonkeeSage t1_j5wzy9p wrote
What prevents a human security guard from misidentifying a woman or person of color when they are using their own eyes, looking at a CCTV screen and comparing against a list of people who are not allowed in? Any bias in the AI toward better identifying white males comes from its training data, which was produced by humans in the first place.
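The false-positive vs. false-negative distinction the thread keeps circling around can be made concrete with a toy sketch. All numbers below are invented for illustration; they are not measurements of MSG's system or any real face-matching product:

```python
# Toy sketch: per-group false positive rate (FPR) and false negative rate
# (FNR) for a hypothetical face-matching system. A false positive flags an
# innocent visitor as banned; a false negative misses someone actually banned.
# Every outcome below is made up purely to illustrate the arithmetic.

def rates(outcomes):
    """outcomes: list of (is_on_banned_list, system_flagged) booleans."""
    fp = sum(1 for actual, flagged in outcomes if not actual and flagged)
    fn = sum(1 for actual, flagged in outcomes if actual and not flagged)
    negatives = sum(1 for actual, _ in outcomes if not actual) or 1
    positives = sum(1 for actual, _ in outcomes if actual) or 1
    return fp / negatives, fn / positives

# Hypothetical outcomes for two demographic groups.
group_a = [(False, False)] * 95 + [(False, True)] * 5 + [(True, True)] * 10
group_b = ([(False, False)] * 85 + [(False, True)] * 15
           + [(True, True)] * 5 + [(True, False)] * 5)

fpr_a, fnr_a = rates(group_a)
fpr_b, fnr_b = rates(group_b)
print(f"group A: FPR={fpr_a:.2f} FNR={fnr_a:.2f}")  # FPR=0.05 FNR=0.00
print(f"group B: FPR={fpr_b:.2f} FNR={fnr_b:.2f}")  # FPR=0.15 FNR=0.50
```

The point of splitting the rates by group is that an overall accuracy figure can look fine while one group absorbs most of the wrongful stops (false positives) and another absorbs most of the misses (false negatives), which is exactly the disparity the commenters disagree about.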