notkevinjohn t1_izgkk4r wrote

Actually, I'll try to add one more thing to offer some more constructive criticism:

If you included examples of data being misrepresented by both options, I think you would avoid misleading people into thinking certain plotting practices are intrinsically misleading. So, for instance, if you showed that data can be distorted by truncating a bar graph's axis, but also that data can be distorted by NOT truncating it, I think you would make a far stronger argument about how to analyze graphical data skeptically.
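Something like this rough matplotlib sketch (made-up numbers, not your data) is what I have in mind; the same two bars can hide or reveal a difference depending on the axis choice, and neither choice is intrinsically the honest one:

```python
import matplotlib.pyplot as plt

# Made-up readings that differ by 0.5%, a difference that may or may not matter.
labels = ["Sensor A", "Sensor B"]
values = [100.0, 100.5]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

# Zero-based axis: visually buries a difference that could be meaningful.
ax1.bar(labels, values)
ax1.set_ylim(0, 110)
ax1.set_title("Starts at zero: difference looks negligible")

# Truncated axis: makes the same difference legible (or exaggerates it).
ax2.bar(labels, values)
ax2.set_ylim(99.5, 101)
ax2.set_title("Truncated: difference looks large")

plt.tight_layout()
plt.show()
```

Which panel is "misleading" depends entirely on whether a 0.5% difference matters for the question the data were collected to answer.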

1

notkevinjohn t1_izgjs3y wrote

Okay, I said what I came here to say. There is nothing special about the examples you selected. If a user encounters, for instance, a bar chart whose axis has been truncated so it doesn't start at zero, it's no more likely that this was done for legitimate reasons than for illegitimate ones. Similarly, a bar chart that begins at zero is just as likely to have had its axis chosen to mislead about the data as to have it start at zero to accurately represent the data. Flagging one of those options as potentially misleading is itself a potentially misleading statement.

If you feel like you need to get the last word in here, feel free. I think I've presented the best form of my argument so I am done now.

1

notkevinjohn t1_izgh4hs wrote

Okay, if you don't actually believe that these are practices that are more likely to be used to mislead than to accurately inform, then what is your justification for labeling them as misleading practices?

One of the most common misunderstandings I dealt with when I was doing STEM education was people misreading graphs where the data are presented non-linearly. If you present people with, for instance, a logarithmic graph, it's much more likely they will get the wrong impression of the data. But I would never consider log graphs to be misleading. It seems to me like you are doing something analogous here.
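For instance, here's a quick matplotlib sketch (made-up exponential data) of the kind of log presentation I mean; an untrained reader often misjudges the linear panel, yet neither panel is dishonest:

```python
import matplotlib.pyplot as plt

# Made-up data that doubles at every step.
x = list(range(20))
y = [2 ** i for i in x]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

ax1.plot(x, y)
ax1.set_title("Linear scale: early growth looks flat")

ax2.plot(x, y)
ax2.set_yscale("log")  # same data; constant growth rate becomes a straight line
ax2.set_title("Log scale: growth rate is visible")

plt.tight_layout()
plt.show()
```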

0

notkevinjohn t1_izg4zbw wrote

Because that analogy just doesn't map to the situation here. There aren't certain plotting/graphing practices that are more likely to be associated with misleading data than they are with accurate data (except maybe not putting labels on your axes). You are making the assumption that if you see plots that do this, they are more likely to be misleading than accurate, but I don't think the data support that claim. I do everything on this list all the time in my job as an engineer, and I am doing it because it's the most accurate way to answer the questions that my data were collected to answer.

1

notkevinjohn t1_izfwnch wrote

I think the effort here is generally misguided. I don't think you can make a list of quick and easy rules for determining which graphs are intentionally misleading you and which are trying to accurately inform you. There are perfectly valid reasons to do everything on this list, and you really have to have a deeper understanding of the data and the context to be able to look back and see if something is misleading. It would be like trying to come up with a list of 'misleading phrases' in English and telling people to watch for those red flags; without a deeper knowledge of the conversation and context, that probably wouldn't work.

−1

notkevinjohn t1_ixg4zez wrote

Yeah, I do think I understand the point you are trying to make, but I still don't agree. And that's because the transparency of the process is inextricable from your ability to see if it's working. In order for a legal system to be useful, it needs to be trusted, and you can't trust a system if you can't break open and examine every part of the system as needed. Let me give a concrete example to illustrate.

Take the situation described in the OP where the police are not distributed evenly along some racial lines in a community. Let's say that the police spend 25% more time in the community of racial group A than they do in the community of racial group B. That group is going to assert that there is bias in the algorithm that leads to them being targeted, and if you cannot DEMONSTRATE that not to be the case, then you'll have the kind of rejection of policing that we've been seeing throughout the country in the last few years. You won't be able to get people to join the police force, you won't get communities to support the police force, and when that happens it's not going to matter how efficiently you can distribute them.

Just as not crashing might be the metric by which you measure the success of an AI that drives cars, trust would be one of the metrics by which you would measure the success of some kind of AI legal system.

1

notkevinjohn t1_ixex7vp wrote

Yes, and I am pushing back on the spectrum of utility vs. transparency that you are suggesting. I think that the usefulness of having a transparent process, especially when it comes to policing, vastly outweighs the usefulness of any opaque process with more predictive power. I think you need to update your definition of usefulness to account for how useful it is to have processes that people can completely understand and therefore trust.

1

notkevinjohn t1_ixebwfn wrote

No, it's not; it's game theory. There may be totally valid reasons for doing that thing, and those reasons might be critical to understand. It's only victim shaming if you start from the assumption that they are doing that thing because they are stupid, or lack self-control, or have some other undesirable characteristic.

1

notkevinjohn t1_ixe8e3s wrote

I don't necessarily agree that we need to have what you call 'unexplainable AI' and what I would call 'AI using machine learning' to solve the kinds of problems that face police today. I think that you can have systems that are extremely unbiased and extremely transparent that are written in ways that are very explicit and can be understood by pretty much everyone.

But I do agree with you that it's a very biased and incomplete argument to point out that automated systems work in ways that are opaque to the communities they serve while ignoring the fact that it's not in any way better to have humans making those same completely opaque decisions.

3

notkevinjohn t1_ixe572q wrote

As I've pointed out elsewhere in the thread, I think a lot of people aren't distinguishing between an explicit algorithm and a machine learning algorithm. I think people in this thread are looking at algorithms as a black box, where you put data in, something incomprehensible happens, and then police go and arrest people. When you have machine learning, it's a non-deterministic process where even the programmer who built the system can't work it backward and say 'this person was arrested because of these inputs to the system.' But there are tons of algorithms that could be developed where the programmer can tell you EXACTLY which inputs led to a particular result, and the transparency of these algorithms could vastly exceed the transparency of machine learning, and even exceed the transparency of our current human-driven system.
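To make that distinction concrete, here's a toy sketch (entirely hypothetical inputs and weights, not any real system) of the kind of explicit algorithm I mean, where every output can be recomputed by hand from its inputs:

```python
def patrol_priority(district):
    """Explicit, auditable scoring: every term and weight is visible."""
    score = (
        2.0 * district["emergency_calls"]      # each recent 911 call adds 2.0
        + 0.5 * district["traffic_incidents"]  # each incident adds 0.5
    )
    # The audit trail IS the algorithm: anyone can trace a result to its inputs.
    print(f"{district['name']}: 2.0*{district['emergency_calls']} "
          f"+ 0.5*{district['traffic_incidents']} = {score}")
    return score

districts = [
    {"name": "North", "emergency_calls": 12, "traffic_incidents": 30},
    {"name": "South", "emergency_calls": 4, "traffic_incidents": 8},
]
ranked = sorted(districts, key=patrol_priority, reverse=True)
```

A trained model offers no equivalent of that print statement; an explicit rule like this does.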

3

notkevinjohn t1_ixe1tfy wrote

Not really, because you can write the algorithm to have as long (or short) a memory as you want it to have. You could even write an algorithm that gives zero weight to all historical crime data, starts by assigning officers randomly throughout the community, and then continuously updates that distribution of officers based only on crime data collected from that randomized initial condition onward. It's basically just wrong to argue that you have to start with an objective data set; you can start with absolute garbage data, and the only effect might be that it takes your algorithm a few extra cycles to get past it and converge on a sensible state.
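Here's a minimal sketch of that idea (hypothetical numbers, Python): the allocation starts random, historical data gets zero weight, and a single "memory" knob controls how quickly the initial condition washes out:

```python
import random

N_DISTRICTS = 4
N_OFFICERS = 100
MEMORY = 0.5  # 0 = forget the past each cycle, near 1 = very long memory

# Zero weight on historical crime data: start from a random assignment.
allocation = [0.0] * N_DISTRICTS
for _ in range(N_OFFICERS):
    allocation[random.randrange(N_DISTRICTS)] += 1

def observe_incidents():
    # Stand-in for incident reports collected during the current cycle.
    return [random.randint(0, 20) for _ in range(N_DISTRICTS)]

for cycle in range(50):
    incidents = observe_incidents()
    total = sum(incidents) or 1
    target = [N_OFFICERS * n / total for n in incidents]
    # Blend the old allocation toward the new target; MEMORY sets how many
    # cycles the (garbage) initial condition survives before converging.
    allocation = [MEMORY * old + (1 - MEMORY) * new
                  for old, new in zip(allocation, target)]
```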

I don't think the OP failed to understand the technology of algorithms at all, and I've been an embedded systems engineer and programmer for 15 years. I think the OP was absolutely right in pointing out that what we're afraid of is that the systems will end up with coverage maps that look too familiar to us, and we won't want to confront that reality. I don't know if that's the case, but I think it's accurate that it's what people fear is the case.

3

notkevinjohn t1_ixdw7mb wrote

Machine Learning, Artificial Intelligence, and Algorithm are all terms that exist in the same space of computer science, but they absolutely do NOT all mean the same thing, and in your post here you used them all interchangeably.

An algorithm is a very generic term for some kind of heuristic that can be followed to produce some result. A recipe for cookies is an algorithm, just like the algorithm on Facebook that decides what posts to show you. Machine learning takes place when the process a system implements is non-deterministic; it does things that the programmers didn't explicitly tell it to do; it actually learns how to do new things. An artificial intelligence is a system that's designed to do tasks the way a human would, often involving processing visual data or making human-like decisions.

If you wanted to make the case that we shouldn't use MACHINE LEARNING in policing, I would 100% agree with that statement; our police policies should be very deliberate and very transparent, and machine learning wouldn't be either of those things. But using that to argue that we shouldn't be embracing policing with explicitly defined algorithms, which are far MORE transparent and deliberate than the humans they would replace, is absolutely indefensible. If there's one thing we've learned in the past few years, it's that police need far more regulation, and that's exactly what algorithms provide, whether they are implemented by a computer or by some system of rules and laws.

1

notkevinjohn t1_ixdq7lc wrote

This was a very poorly structured argument. It basically makes the case that police algorithms are bad because they allow some of the biases that already exist in our current system to perpetuate, ignoring the fact that the alternative is the system that created those biases in the first place. If police have historically overpoliced some communities, then we have every reason to believe they will continue to do so if we continue with the system of 'police departments make human decisions about how to allocate their resources.' If we switch to the algorithmic model, then continuing that practice is certainly one possible outcome, but it's also entirely possible that we build into that algorithm a coefficient on historical crime whose value the community gets a say in.

Let's say that the 'risk factor' of any given community is based on some collection of metrics like the number of crimes committed in the last 10 years, the number of crimes committed in the last 6 months, the number of 911 calls originating in that community in the last year, and the number of non-criminal emergency calls (fire, ambulance, etc.) in that community in the last year:
RF = a1*Crime10y + a2*Crime6m + a3*911Crime + a4*911NonCrime
Now, imagine that through some democratic process the members of that community get to assign values for a1->a4, such that they can place a very low (even zero) value on a1 to completely assuage the concerns of the author in that regard. You simply CANNOT do this if subjective humans are the ones making the decisions.
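In code, that's nothing more exotic than this sketch (the weights and counts here are hypothetical):

```python
def risk_factor(crime_10y, crime_6m, calls_911_crime, calls_911_noncrime,
                a1, a2, a3, a4):
    """RF = a1*Crime10y + a2*Crime6m + a3*911Crime + a4*911NonCrime"""
    return (a1 * crime_10y + a2 * crime_6m
            + a3 * calls_911_crime + a4 * calls_911_noncrime)

# Community-voted weights: setting a1 = 0 zeroes out the 10-year history
# entirely, which directly addresses the author's concern about legacy bias.
weights = dict(a1=0.0, a2=1.0, a3=0.8, a4=0.2)

rf = risk_factor(crime_10y=540, crime_6m=32, calls_911_crime=110,
                 calls_911_noncrime=75, **weights)
print(rf)  # 0*540 + 1.0*32 + 0.8*110 + 0.2*75 = 135.0
```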

I simply do not see a non-luddite argument here for why algorithms in policing are a bad thing, as opposed to a neutral thing that has as much propensity to improve policing as to make it worse.

5

notkevinjohn t1_iu5upyh wrote

First off, I disagree: I think his point WAS to have a more poetic relationship with nature IN LIEU of a more technological one, rather than IN ADDITION to a more technological relationship with nature.

Second off, even if his point was to merge nature and technology 'poetically,' that's an argument so subjective as to be useless. What I consider a poetic merger, others wouldn't consider poetic at all. You might as well be arguing that our relationship with nature should just be 'better,' because that's as subjectively valid as 'poetic' and just as devoid of specificity.

1

notkevinjohn t1_iu50er3 wrote

That's just completely wrong. The thing that will help us avoid ecosystem collapse isn't going to be romanticism; it's going to be technology. Take your example of pesticides: we don't spray them because we hate the poetry of nature, we spray them because we need to make sure that the food we're growing gets eaten by humans and not insects. The solution isn't to be more in touch with nature; it's to understand the technologies that can prevent the crops from being lost without spraying them with chemicals. It's a classic case of enlightenment values versus romantic values: we're not going to romance our way out of this, we're going to enlighten our way out of this.

1

notkevinjohn t1_iu4ztbc wrote

Nature and technology are not 'opposites.' You are trying to obfuscate with semantics, but my underlying point remains clear. A poetic relationship with nature doesn't allow more people the privilege of getting to be born and getting to live to adulthood; technology does. I don't see how you can argue around that but clearly you're going to keep trying.

1

notkevinjohn t1_itveiq3 wrote

Can you give me any reason to believe that there is ANYONE on Earth who sees things the way you are describing them? Do you see the world that way? Do you know anyone who has told you they see the world that way? Or are you just projecting that world view onto people you disagree with?

1

notkevinjohn t1_ittfho9 wrote

Yes, the amount we have is quite the point. Because if you want to go back to a world where everyone lives a hunter-gatherer lifestyle, where they and a small kin group control a large area of land in which to hunt and gather and otherwise live an indigenous lifestyle, the population the world can support is going to be a fraction of what it is today. How do you propose we get the population back down to hunter-gatherer levels?

5

notkevinjohn t1_itrsj7x wrote

> the idea that we either live on in a separate disconnected realm or cease to be entirely doesn't capture that sense of responsibility one might have to previous generations

The part you're missing is that whether or not the idea is TRUE is infinitely more important than how it can be contextualized with respect to some kind of responsibility to past or future generations, and it IS true. We are currently using all kinds of technology to support a population that's many orders of magnitude larger than would be possible if we all lived in hunter-gatherer tribes or subsistence farming communities.

3

notkevinjohn t1_itqtgn7 wrote

I find arguments like these to be of very little value. Suggesting that the value of a river is more than the hydroelectric energy that can be extracted from it is certainly true, but that's hardly a comfort if you are living in a home without electricity so that people can relate poetically to a river. There seems to be a romanticism about returning to a time when people lived closer to nature, but those lifestyles simply aren't sustainable with the population we have now. So we have a choice: we can embrace the fact that it is technology that allows us to support the lifestyle we have now, and that the greatest luxury we have afforded ourselves is each other; or we can go back to living closer to nature and accept that many billions of people will have to die for us to get there.

11