
Fake_William_Shatner t1_j77mj24 wrote

The bigger problem is you not understanding AI or how bias happens. If you did, the point NoteIndividual was making would be a lot more obvious.

There is not just one type of "AI" -- for the most part it's a collection of algorithms. Not only does the type of data you put in matter -- even the order can change the results, because the model doesn't train on all the data at once. One common method is to randomly sample the data over and over again as the AI "learns" -- or, better put, as the algorithm abstracts the data with neural nets and Gaussian functions.
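A toy sketch of the order-sensitivity point above (all values are made up for illustration): a one-weight model updated online sees the same two data points in two different orders and ends up with two different weights.

```python
# Toy demonstration: the order of training examples changes the result.
# One-weight linear model y = w * x, updated online with gradient steps.

def train(points, lr=0.5):
    """Run one online pass over (x, y) pairs, updating a single weight."""
    w = 0.0
    for x, y in points:
        w += lr * (y - w * x) * x  # gradient step toward the target
    return w

data = [(1, 1), (2, 4)]
w_forward = train(data)         # sees (1, 1) first -> 3.5
w_reversed = train(data[::-1])  # sees (2, 4) first -> 2.5

# Same data, same algorithm -- different order, different model.
print(w_forward, w_reversed)
```

Randomly re-sampling (shuffling) the data each pass is exactly the standard trick for averaging this effect out.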

It's very easy to say, "in an area where we've arrested people, the family members of convicts and their neighborhoods are more likely to commit crime." What do you do once you know this? Arrest everyone, or give them financial support? Or set up after-school programs to keep kids occupied doing interesting things until their parents get home from work? There is nothing wrong with BIAS if the data is biased -- the problem comes from what you do with it and how you frame it.

There are systems that are used to determine probability. So if someone has a symptom like a cough, what are the chances they have the flu? Statistics can be compiled for every symptom, and the probability of the cause can be determined. Each new data point, like body temperature, can raise or lower the result. The more data, over more people, over more time, the more predictive the model will be. If you are prescribing medicine, then an expert system can match the most likely treatment with a series of questions.
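A minimal sketch of that updating idea: start from a base rate and fold in each symptom with a Bayesian update. All of the probabilities below are invented for illustration, not real medical statistics.

```python
# Hedged sketch: each new symptom raises or lowers the probability of flu.
# Numbers are assumptions for illustration only.

def update(prior, p_given_flu, p_given_not):
    """One Bayesian update: fold a new symptom into the flu probability."""
    num = prior * p_given_flu
    return num / (num + (1 - prior) * p_given_not)

p = 0.05                   # assumed base rate of flu in the population
p = update(p, 0.90, 0.20)  # patient has a cough
p = update(p, 0.80, 0.05)  # patient also has a fever
print(round(p, 2))         # each consistent symptom pushed the estimate up
```

Each call is one "data point" in the sense above: a symptom that fits the flu better than its alternatives increases the estimate, one that doesn't decreases it.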

We need to compile data on "what works to help" in any given situation. The police department is a hammer and they only work on nails.

0

I_ONLY_PLAY_4C_LOAM t1_j77vilc wrote

This is the second time in a day that a redditor has accused me of not understanding technology because I disagreed with them about a point regarding AI. I love seeing people condescend to me about technology I have years of experience working with in academic and professional settings.

"The data says black people commit more crime" is still not a reason to build automated systems that treat them differently. Biased models are not a good reason to abandon the constitutional and civic principles this country was founded on.

1

Fake_William_Shatner t1_j78i4oh wrote

>"The data says black people commit more crime" is still not a reason to build automated systems that treat them differently.

I agree with that.

However, your blanket statement about what it does and doesn't do sounded like saying "don't use a computer!" because someone used one wrong one time.

My entire point is it's about the data they choose to measure and what their goals are. Fighting "pre-crime" is the wrong use for it. But, identifying if people are at risk and sending them help? -- I think that would be great.

1