Submitted by ADefiniteDescription t3_z1wim3 in philosophy
TheRoadsMustRoll t1_ixdz3q1 wrote
Reply to comment by FaustusC in The Ethics of Policing Algorithms by ADefiniteDescription
>Neighborhood A. Neighborhood B.
>
>A has minimal police patrols, minimal police calls, minimal interactions with law enforcement.
>
>B has regular patrols, regular calls and frequent interactions with law enforcement.
Correction: if you're using algorithms, all you can say is "Neighborhood A *had* minimal police patrols..." because you are always looking into the past.
In the past there were no algorithms. So where do you start the historical data set? The 1940s? 50s? 60s? Those were racist days. So were the 80s, 90s, and 2000s.
If you don't start with an objective data set, your algorithms will be biased. And with backward-looking algorithms you won't know that a neighborhood's profile has changed until its recorded stats are significantly different; in the meantime you'll be letting crimes go unaddressed.
Your particularly unsophisticated approach to a very sophisticated technology (which you fail to understand) is at the heart of this issue.
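To make the lag concrete, here's a toy sketch (every number invented) of a backward-looking estimate: the neighborhood's true rate drops at week 10, but the recorded estimate, seeded from old data, takes many more weeks to catch up.

```python
# Toy illustration of lag in a backward-looking estimator (all numbers invented).
def ewma_lag_demo(alpha=0.1, weeks=30):
    estimate = 8.0           # seeded from old, high-crime historical stats
    true_rate = 8.0          # hypothetical incidents per week
    for week in range(weeks):
        if week == 10:
            true_rate = 2.0  # the neighborhood actually changes here
        # exponentially weighted moving average of observed counts
        estimate = (1 - alpha) * estimate + alpha * true_rate
        if week % 5 == 4:
            print(f"week {week:2d}: true rate {true_rate:.1f}, recorded estimate {estimate:.2f}")

ewma_lag_demo()  # the estimate is still ~2.7 at week 29, long after the change
```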
notkevinjohn t1_ixe1tfy wrote
Not really, because you can write the algorithm to have as long (or as short) a memory as you want it to have. You could even write an algorithm that gives zero weight to all historical crime data: it starts by assigning officers randomly throughout the community, then continuously updates that distribution based only on the crime data recorded after that randomized initial condition. It's basically just wrong to argue that you have to start with an objective data set; you can start with absolute garbage data, and the only effect might be that your algorithm takes a few extra cycles to get past it and converge on a sensible state.
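Something like this minimal sketch of that loop (districts and rates invented, and simplified so that observed incidents don't depend on where patrols are placed, which is itself the feedback loop critics worry about):

```python
import random

DISTRICTS = ["A", "B", "C", "D"]
TRUE_RATES = {"A": 1, "B": 6, "C": 3, "D": 2}  # hypothetical underlying rates

def observe(district):
    # Stand-in for incoming incident reports: a noisy draw around the true rate.
    return max(0, TRUE_RATES[district] + random.randint(-1, 1))

counts = {d: 0 for d in DISTRICTS}                # zero weight to all history
shares = {d: random.random() for d in DISTRICTS}  # randomized initial condition

for cycle in range(50):
    for d in DISTRICTS:
        counts[d] += observe(d)
    total = sum(counts.values()) or 1
    shares = {d: counts[d] / total for d in DISTRICTS}  # updated from new data only

print({d: round(s, 2) for d, s in shares.items()})
# The final shares track observed incidents, not the garbage initial state.
```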
I don't think the OP failed to understand the technology of algorithms at all, and I've been an embedded systems engineer and programmer for 15 years. I think the OP was absolutely right in pointing out that what we're afraid of is that these systems will end up with coverage maps that look all too familiar, and that we won't want to confront that reality. I don't know whether that's true, but I think it's an accurate description of what people fear.
FaustusC t1_ixe3oaj wrote
100%, spot on.
People are acting like this AI would only extrapolate from past history and never update the model.
You could literally feed in historical data that says there's only crime in neighborhood A, despite the opposite being true, and the AI would correct the issue within a few cycles, as you said. The big thing here is that these prediction models learn, and they learn only from their input. If everything but the location and type of crime were scrubbed from the data, with literally no demographic information at all, the results would come out the same.
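The scrubbing part is straightforward too; a sketch with hypothetical field names:

```python
# Whitelist only location and offense type; field names here are invented.
ALLOWED_FIELDS = {"location", "offense_type"}

def scrub(record):
    """Drop every demographic field before the record reaches the model."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "location": "4th & Main",
    "offense_type": "burglary",
    "suspect_race": "...",       # dropped
    "suspect_age": "...",        # dropped
    "reporting_officer": "...",  # dropped
}
print(scrub(raw))  # {'location': '4th & Main', 'offense_type': 'burglary'}
```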
I think even philosophically we're at a point where we can't discuss whether the data might just be data without people crying foul, and it disgusts me. Racism by low expectations is still racism. I grew up in a very, very shitty neighborhood B. I've also lived in neighborhood As. I can't say A was completely without incident, but comparing the two, even from my anecdotal experience, is night and day.
I think the biggest incidents in A were someone complaining about horse droppings on the beach and some teens setting a dumpster on fire.
In B, someone got shot. Completely anecdotal, but still relevant.
notkevinjohn t1_ixe572q wrote
As I've pointed out elsewhere in the thread, I think a lot of people aren't distinguishing between an explicit algorithm and a machine learning algorithm. People in this thread are looking at algorithms as a black box: you put data in, something incomprehensible happens, and then police go and arrest people. With machine learning that's a fair worry; it's an opaque process where even the programmer who built the system can't work it backward and say "this person was arrested because of these inputs." But there are tons of algorithms that could be developed where the programmer can tell you EXACTLY which inputs led to a particular result, and the transparency of those algorithms could vastly exceed the transparency of machine learning, and even the transparency of our current human-driven system.
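For instance, here's a sketch of an explicit scoring rule (weights and inputs invented) where every output carries an itemized account of exactly which inputs produced it, something no trained black-box model gives you:

```python
# An explicit, fully traceable scoring rule. Weights and inputs are invented;
# the point is that the result can be audited input by input.
WEIGHTS = {"recent_incidents": 2.0, "open_cases": 1.5, "response_backlog": 0.5}

def patrol_priority(inputs):
    contributions = {k: WEIGHTS[k] * inputs[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return score, contributions  # the score AND exactly how it was computed

score, why = patrol_priority({"recent_incidents": 4, "open_cases": 2, "response_backlog": 6})
print(score)  # 14.0
print(why)    # {'recent_incidents': 8.0, 'open_cases': 3.0, 'response_backlog': 3.0}
```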
FaustusC t1_ixe5mvt wrote
Tbh, I don't think most people even vaguely understand the difference, but they're thrilled at the opportunity to morally grandstand against a supposed injustice.
FaustusC t1_ixdzzua wrote
Assuming the data itself is biased is the heart of this issue, and it's why the people who assume that shouldn't be allowed to handle it at all.
Claiming "that era was racist," so all data must be discarded, is a cop-out that ignores the issues.
Data is nothing but points. Acting like middle-class, median-income A and lower-class, low-income B will have similar or equal crime rates is insanity, and racism. Pretending A has the same amount of crime and just isn't patrolled is ignorant at best, racist at worst.