Submitted by mycatisanorange t3_10jz8tt in news
mrcolon96 t1_j5nkv44 wrote
The future is here and it's kinda unnerving. Maybe I'm just paranoid, but this sets a precedent, and I absolutely hate the implications of it becoming some sort of standard.
ADAIRP1983 t1_j5onyta wrote
We seem to have backed ourselves into a corner with the idea that all mistakes are bad. In the pursuit of eradicating every error, we will become completely redundant.
SadGuitarPlayer t1_j5nl7oi wrote
Fair enough, but I don't care for humans much myself, so let's just let the AI overlords take over already
aradraugfea t1_j5ou160 wrote
All the AI we're building right now has proven, over and over again, either to have our biases built into it OR to learn them quickly.
Facial recognition, already treated as this perfect, flawless tool, cannot tell dark-skinned people apart. Its advertised success rate is based on how well it could tell apart the post-grads working on the thing. Our systemic issues of generational poverty feed into systemic gaps in education, which feed back into poverty, and NOW we're feeding them into law enforcement AI that supposedly takes human biases out of the equation, until it turns out that it LITERALLY cannot tell Black people apart. Yeah, it's not like the AI chose to have that flaw; it's not actively dismissing anyone darker than khaki as "eh, you all look alike." But the developers' failure to even consider whether it worked on non-white faces led to this.
Long story short, we are nowhere close to being able to build the AI that will remove our flaws from the equation.
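To make the point about advertised success rates concrete, here's a minimal sketch (the groups, counts, and the whole "evaluation" are hypothetical, purely for illustration) of how a single headline accuracy number can hide a much worse failure rate for one group if nobody bothers to break the results down:

    # Hypothetical evaluation results: (was_the_match_correct, demographic_group).
    # All counts are invented for illustration.
    results = (
        [(True, "group_a")] * 95 + [(False, "group_a")] * 5 +   # 95% correct
        [(True, "group_b")] * 60 + [(False, "group_b")] * 40    # 60% correct
    )

    def accuracy(rows):
        """Fraction of correct matches in a list of (correct, group) pairs."""
        return sum(1 for correct, _ in rows if correct) / len(rows)

    # The headline number looks reasonable...
    print(f"overall accuracy: {accuracy(results):.1%}")         # 77.5%

    # ...but breaking it down per group shows who the system actually fails.
    for group in ("group_a", "group_b"):
        rows = [r for r in results if r[1] == group]
        print(f"{group}: {accuracy(rows):.1%}")                 # 95.0% vs 60.0%

If the test population is just whoever happened to be in the lab, that per-group breakdown never even gets computed, and the aggregate number is the only thing that gets advertised.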