
Nemesis_Ghost t1_j5wusqp wrote

If 90% of the false positives are people of color or women, that's still a problem. Imagine you're going to an event at MSG and get stopped by security because of a false positive. Security usually isn't very nice to someone they suspect of being a problem. Being stopped by security for any reason can ruin an evening, and that's before you factor in intoxication or other factors.

9

Kitchen-Award-3845 t1_j5x8qj7 wrote

There aren't any false positives AFAIK, just false negatives, AKA darker-skinned folks don't get a match at all.
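The distinction the two commenters are drawing is between false-positive rate (flagged but not actually on the watchlist) and false-negative rate (on the list but missed), each of which can differ by demographic group. A minimal sketch with made-up numbers, purely to illustrate how the two rates are computed separately per group:

```python
# Toy illustration (hypothetical data, not real benchmark results):
# "positive" means the system flags a face as matching the watchlist.

def rates(records):
    """records: list of (actually_on_list, flagged) booleans.
    Returns (false positive rate, false negative rate)."""
    fp = sum(1 for truth, flag in records if flag and not truth)
    fn = sum(1 for truth, flag in records if truth and not flag)
    negatives = sum(1 for truth, _ in records if not truth)
    positives = sum(1 for truth, _ in records if truth)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical outcomes for two demographic groups:
group_a = [(False, False)] * 95 + [(False, True)] * 5 + [(True, True)] * 10
group_b = [(False, False)] * 90 + [(False, True)] * 10 \
        + [(True, False)] * 5 + [(True, True)] * 5

print(rates(group_a))  # (0.05, 0.0)  — 5% wrongly flagged, no misses
print(rates(group_b))  # (0.1, 0.5)   — 10% wrongly flagged, half missed
```

Both failure modes are harmful in different ways: a higher false-positive rate means more innocent people get stopped, while a higher false-negative rate means the system simply fails to work for that group.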

−2

MonkeeSage t1_j5wzy9p wrote

What prevents a human security guard from misidentifying a woman or person of color when they're using their own eyes, looking at a CCTV screen and comparing against a list of people who aren't allowed in? Any bias in the AI toward better identifying white males comes from training data produced by humans in the first place.

−8