Fake_William_Shatner t1_j7495lo wrote

An AI lawyer might not know their client is Black or dropped out of high school, for instance. And nobody went to the AI lawyer's fraternity, so THAT goes right out the window.

31

I_ONLY_PLAY_4C_LOAM t1_j74g5mn wrote

Au contraire, there's already AI being used in the criminal justice system, and it's incredibly biased.

35

ThMogget t1_j74loqm wrote

Which means we can easily measure and remove it. Good luck doing that with humans.
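
A minimal sketch of the "measure" half (the predictions and group labels below are made up for illustration, not from any real system): compare how often a model flags each group, which is a basic demographic-parity check.

```python
# Hypothetical model outputs and group labels, for illustration only:
# a demographic-parity check compares how often each group gets flagged.
from collections import defaultdict

preds = [1, 0, 1, 1, 0, 0, 1, 0]                    # model outputs (1 = flagged)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # protected attribute

flagged = defaultdict(list)
for p, g in zip(preds, groups):
    flagged[g].append(p)

for g, ps in flagged.items():
    rate = sum(ps) / len(ps)
    print(f"group {g}: flagged {rate:.0%} of the time")
# A large gap between the two rates is measurable evidence of bias.
```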

7

TrailHazer t1_j7585ix wrote

“three arrests for a Black person could indicate the same level of risk as, say, two arrests for a white person.” And this is a solution offered in the article. The whole argument is the same as in Weapons of Math Destruction: you can't use data like zip code to determine the best policing strategy for that zip code. Beyond dumb, and it falls apart the moment you talk to anyone outside feel-good-about-yourself land for white liberals.

−4

NoteIndividual2431 t1_j74kiam wrote

That is one interpretation of the data, but not the only one possible.

Suffice it to say that AI can't actually be biased in and of itself, but it ends up adopting whatever biases exist in its training data.

−8

I_ONLY_PLAY_4C_LOAM t1_j74n6it wrote

I have no idea what point you are trying to make here. If an AI adopts the bias of its training data, then it's biased lol.

10

Fake_William_Shatner t1_j77mj24 wrote

The bigger problem is that you don't understand AI or how bias happens. If you did, the point NoteIndividual was making would be a lot more obvious.

There is not just one type of "AI" -- for the most part it's a collection of algorithms. Not only does the data you put in matter, even the order can change the results, because the model doesn't train on all the data at once. One common method is to randomly sample the data over and over again as the AI "learns" -- or, better said, as the algorithm abstracts the data with neural nets and Gaussian functions.
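
As a minimal sketch of that order-dependence (the tiny dataset and learning rate are invented for illustration): the same points fed to stochastic gradient descent in two different random orders end up with slightly different weights.

```python
# Hypothetical data, for illustration only: train the same linear model
# twice, shuffling the samples in a different random order each time.
import random

def train(points, seed, epochs=5, lr=0.1):
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        shuffled = points[:]
        rng.shuffle(shuffled)          # the "random sampling" step
        for x, y in shuffled:
            err = (w * x + b) - y      # prediction error on one sample
            w -= lr * err * x          # gradient step for the weight
            b -= lr * err              # gradient step for the bias
    return w, b

data = [(0, 1), (1, 3), (2, 5), (3, 7)]  # roughly y = 2x + 1
print(train(data, seed=1))  # one sample order...
print(train(data, seed=2))  # ...another order, slightly different weights
```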

It's very easy to say, "in an area where we've arrested people, the family members of convicts, and people in their neighborhoods, are more likely to commit crime." What do you do once you know this information? Arrest everyone, or give them financial support? Or set up after-school programs to keep kids occupied doing interesting things until their parents get home from work? There is nothing wrong with BIAS if the data is biased -- the problem comes from what you do with it and how you frame it.

There are systems that are used to determine probability. So if someone has a symptom like a cough, what are the chances they have the flu? Statistics can be compiled for every symptom, and the probability of the cause can be determined. Each new data point, like body temperature, can increase or decrease the result. The more data over more people over more time, the more predictive the model will be. If you are prescribing medicine, then an expert system can match the most likely treatment with a series of questions.
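
A minimal sketch of that kind of updating (every probability here is made up for illustration, and it assumes the symptoms are independent, as a naive Bayes model does): each new symptom multiplies into the odds via Bayes' rule.

```python
# Hypothetical numbers, for illustration only: update P(flu) as each new
# symptom (data point) arrives, using Bayes' rule on the odds.
def update(prior, p_symptom_given_flu, p_symptom_given_no_flu):
    # posterior odds = prior odds * likelihood ratio
    odds = (prior / (1 - prior)) * (p_symptom_given_flu / p_symptom_given_no_flu)
    return odds / (1 + odds)

p = 0.05                   # assumed baseline rate of flu
p = update(p, 0.80, 0.20)  # cough: common with flu, rarer without it
p = update(p, 0.90, 0.10)  # fever: pushes the probability up further
print(f"P(flu) after cough and fever: {p:.2f}")
```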

We need to compile data on "what works to help" in any given situation. The police department is a hammer, and it only works on nails.

0

I_ONLY_PLAY_4C_LOAM t1_j77vilc wrote

This is the second time in one day that a redditor has accused me of not understanding technology when I disagreed with them about AI. I love seeing people condescend to me about technology that I have years of experience working with in academic and professional settings.

"The data says black people commit more crime" is still not a reason to build automated systems that treat them differently. Biased models are not a good reason to abandon the constitutional and civic principles this country was founded on.

1

Fake_William_Shatner t1_j78i4oh wrote

>"The data says black people commit more crime" is still not a reason to build automated systems that treat them differently.

I agree with that.

However, your blanket statement about what it does and doesn't do sounded like saying "don't use a computer!" because someone used one wrong one time.

My entire point is that it's about the data they choose to measure and what their goals are. Fighting "pre-crime" is the wrong use for it. But identifying whether people are at risk and sending them help? I think that would be great.

1

KickBassColonyDrop t1_j77s2o8 wrote

If the AI is unaware of the race of the client, doesn't that mean it's actually impartial? Because it's simply treating the person as a client/human and not introducing any bias?

2

Fake_William_Shatner t1_j78jorg wrote

I was just kidding. However, if you were giving someone legal advice about going to trial -- it makes a difference in venue and jury selection.

I'm not exactly sure of the stat, but I thought it was around 2X more time given to Black kids than white kids on punishments, because judges tend to treat them as older.

And I'm sure you'd want statistics on outcomes -- just to know what your chances of winning versus pleading would be. And would an AI ask for a change of venue to find a jury of peers?

The human factor is important, but it would be nice to be more impartial.

1