
RonPMexico t1_iu10v69 wrote

Who said anything about illegal? Is it illegal to put up "for sale" signs in a white neighborhood? Is it illegal to claim the earth is 6,000 years old from a pulpit?

If something is illegal, it is illegal, but that's not really useful or meaningful to this discussion.

1

myspicename t1_iu1ok0e wrote

So you think it's ok to exclusively advertise properties to white people?

−1

Tall-Log-1955 t1_iu1u7p6 wrote

I think that if the reason is that non-white people did not engage with the ad (because they are not interested in it), then yes, it is okay.

If the reason is that some property developer wants to keep out non-whites then it is not okay.

1

myspicename t1_iu1zh5f wrote

If the algo doesn't advertise to non-white people, how would we know the problem is engagement? I'm trying to lead y'all through a line of logic that ends with the idea that outsourcing racist activity to an algo doesn't make it not racist.

1

Tall-Log-1955 t1_iu26rpu wrote

These algorithms don't have that problem, because they show ads to everyone in small amounts. Then, whichever demographic/group engages at the highest rate, they show the ad to more people like that.

You can read how they work here:

https://en.m.wikipedia.org/wiki/Multi-armed_bandit
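For a rough sense of the mechanism, here is a minimal epsilon-greedy sketch (one simple bandit strategy; the function name, segment labels, and engagement rates below are all made up for illustration):

```python
import random

def epsilon_greedy(true_rates, rounds=10000, epsilon=0.1, seed=0):
    """Each 'arm' is an audience segment. Most of the time, show the ad
    to the segment with the best observed engagement rate; a small
    fraction (epsilon) of the time, explore a random segment."""
    rng = random.Random(seed)
    shows = [0] * len(true_rates)    # impressions per segment
    clicks = [0] * len(true_rates)   # engagements per segment
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))  # explore
        else:
            # exploit: pick the segment with the best observed rate
            # (unseen segments are treated optimistically)
            arm = max(range(len(true_rates)),
                      key=lambda i: clicks[i] / shows[i] if shows[i] else 1.0)
        shows[arm] += 1
        if rng.random() < true_rates[arm]:
            clicks[arm] += 1
    return shows

# Three segments with hypothetical engagement rates of 5%, 4%, and 1%:
impressions = epsilon_greedy([0.05, 0.04, 0.01])
```

Note that every segment keeps receiving some impressions (the exploration step), but the budget concentrates on whichever segment engages most, which is exactly the feedback loop under discussion.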

1

myspicename t1_iu26zou wrote

OK, so it's generalizing from a small sample size and then using race, or race-proxy demographics. Do y'all seriously not see the issue?

1

Tall-Log-1955 t1_iu2eaz5 wrote

If race is actually predictive of interest in a product, I don't think it's that bad. Is it sexist to show tampon ads to men less often? Is it ageist to show toy ads to senior citizens less often?

If people of a given race are genuinely not interested in a product, I don't think it harms them to show them the ads less often.

1

myspicename t1_iu2egdv wrote

I think for many consumer products, like hair care or tampons, this is true. It becomes insidious when it's real estate, education, accommodations, etc., if left unchecked.

1

RonPMexico t1_iu1xhnz wrote

The only way the algorithm would exclusively advertise to whites would be if that were an explicit direction given to the system. If you program the model to advertise at the highest price point, and the ads were sent to high-income earners in the school district who have searched for realtors (and any number of other relevant variables), and the results were mostly white, I'd have absolutely no problem with it.

1

myspicename t1_iu1z7ms wrote

So you are OK with a system being racist so long as it doesn't explicitly say so. There was a reason advertising a property to only one race was made illegal.

1

RonPMexico t1_iu1zro2 wrote

The thing is, the model isn't racist. I am explicitly saying that including race in these systems should be prohibited.

1

myspicename t1_iu20n0k wrote

This is like when politicians carve up districts based on other factors to proxy race. The model is definitionally racist if it continues to fuel racial segregation.

1

RonPMexico t1_iu21nq4 wrote

Politicians are optimizing for political affiliation and use race as a proxy for that. I am saying that is the exact opposite of what ought to be allowed.

1

myspicename t1_iu24bff wrote

Racism is ok if you find a proxy. Got it.

1

RonPMexico t1_iu25qoq wrote

Have you considered the opposite case? Using the real estate example: you have x variables, including salary, school district, visits to real estate websites, and so on. Each one of those variables is given a weight by the system. We don't know what those weights are; the system operates as a "black box" to determine the appropriate values.

You look at the results and decide Native Americans are underrepresented. Now you have to add "Native American" as a variable, and in order to get the results you want, you have to decide how much it should impact the final results. So who decides to favor Native Americans, and by how much? Would that not be illegal under the Fair Housing Act?
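To make the objection concrete, here is a minimal sketch of the weighted-scoring setup being described. Every variable name and weight below is invented for illustration; the point is only that adding a demographic variable forces someone to pick its weight by hand:

```python
def score(prospect, weights):
    """Weighted sum of a prospect's features; stands in for the
    'black box' scoring described above."""
    return sum(w * prospect.get(feature, 0.0)
               for feature, w in weights.items())

# Race-neutral weights, normally learned inside the black box:
neutral_weights = {
    "income": 0.5,
    "in_school_district": 0.3,
    "visited_realtor_site": 0.2,
}

prospect = {
    "income": 0.8,
    "in_school_district": 1.0,
    "visited_realtor_site": 1.0,
}

base = score(prospect, neutral_weights)

# The contested step: correcting for underrepresentation means adding
# an explicit demographic feature, and a human must choose this weight.
adjusted_weights = dict(neutral_weights, is_native_american=0.4)
prospect_na = dict(prospect, is_native_american=1.0)
boosted = score(prospect_na, adjusted_weights)
```

The code itself is trivial; the argument in the comment is about who sets that last weight and on what authority.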

1

myspicename t1_iu27918 wrote

If companies ever backchecked their algos for mistakes or systematic bias, I might not be against it.

1

RonPMexico t1_iu27fim wrote

I don't know what that sentence means.

1

myspicename t1_iu27s7l wrote

Is the concept of machine learning making a racist assumption and enforcing racism alien to you? It's pretty widely discussed.

1

RonPMexico t1_iu28ega wrote

I know. That's what we are discussing. You take the view that if an algorithm returns results that are not directly proportional to racial demographics, the system is racist. I'm saying that is ridiculous.

What doesn't convey meaning is:

If companies ever backchecked their algos for mistakes or systematic bias, I might not be against it.

0

myspicename t1_iu28ibd wrote

Did I say directly proportional? Stop strawmanning my argument.

1

RonPMexico t1_iu28p2s wrote

How far from proportional would be okay, and at what point past that is it racist?

0

myspicename t1_iu296ce wrote

Clearly there's no strict line. Just like a white-passing Black person crossing the color line under Jim Crow, racist systems aren't absolute.

I'd say if there's a vastly disproportionate discrepancy, it's worth checking. And I'd say if it's around things like housing or education (rather than, say, hair-care items), it's more salient.

1

RonPMexico t1_iu29jmi wrote

How about this? We remove race from the equation entirely. Surely that would lead to the best outcome, no?

0

myspicename t1_iu2a7pt wrote

Absolutely not, and I think it's fairly obvious it wouldn't. This was tried for education and housing, and because of historical inequity and the in-group cultural bias of systems built for a majority, it doesn't work.

Even workplaces or academic institutions that just have policies appealing to white majorities can enforce that. It's a trivial example, but even not having, say, vegetarian or halal options can be a blocker, and it's "race blind" to be fine not having them.

1

RonPMexico t1_iu2axl8 wrote

So you are saying they can't be race-neutral, and you can't define when it's racist. Who gets to decide where to draw these arbitrary lines? How would they work with optimized systems? What is fair enough?

−1

myspicename t1_iu2b900 wrote

This is why we have laws around this. Let me guess, you think markets correct all inequities?

1

RonPMexico t1_iu2cfec wrote

I'm saying that when you artificially favor one race over another in an otherwise race-neutral algorithm to get your desired results, it's a bad thing. You believe race should factor into everything, and you have the temerity to claim the moral high ground. Racism is bad, and you ought to be ashamed of your views.

0