Nemesis_Ghost t1_j5wusqp wrote
Reply to comment by MonkeeSage in NY AG wants answers on Madison Square Garden's use of facial recognition against legal opponents by Sorin61
If 90% of the false positives are people of color or women, that's still a problem. Imagine you're going to an event at MSG and get stopped by security because of a false positive. Security usually isn't very nice to someone they suspect of being a problem. Being stopped by security for any reason can ruin an evening, and that's before you account for intoxication or anything else.
Kitchen-Award-3845 t1_j5x8qj7 wrote
There aren't any false positives AFAIK, just false negatives, i.e. darker-skinned folks don't get a match at all.
MonkeeSage t1_j5wzy9p wrote
What prevents a human security guard from misidentifying a woman or person of color when they're looking at a CCTV screen with their own eyes and comparing faces against a list of people who aren't allowed in? Any bias in the AI toward better identifying white males comes from training data that was produced by humans in the first place.