Comments


7734128 t1_j9xlx57 wrote

Yeah "people without a disability" truly need protection. Well done.

9

luffreezer t1_j9xts7d wrote

This is just a mirror of who gets the most hate speech.

It says more about human discourse than it says about the AI.

Edit: here is a small paragraph from the conclusion of the Article that I think is important to keep in mind:

«It is also important to remark that most sources for the biases reported here are probably unintentional and likely organically emerging from complex entanglements of institutional corpora and societal biases. For that reason, I would expect similar biases in the content moderation filters of other big tech companies.»

13

Scarlet_pot2 t1_j9xy5ar wrote

The "Fat people" need to be protected! Lmao, they're pretty high on the protected list.

32

Scarlet_pot2 t1_j9xyhv6 wrote

Call me anti-capitalist or whatever, but I'm not upset OpenAI isn't "protecting" wealthy people. I mean, pretty much every religion says greed and the wealthy are pretty bad. There are common ideologies like socialism, communism, and Marxism that critique greed and the wealthy.

To me, it's a good sign that AI isn't being used to enforce wealthy worship.

24

TheDividendReport t1_j9xyo5y wrote

Here comes an anecdotal statement: I, a leftist, have never used a chatbot to talk up some sense of hatred or disbelief about conservatives.

The first thing that finally made the tech "click" for my Republican family member? Using the chatbot to make a comical tirade letter to his senator about immigrants taking jobs and parasites using welfare.

The following statement is uneducated, but I'd stand by it on a gut feeling: if you are coding a system and expecting one group of people to be more hateful than another, putting in restraints for x vs. y, it makes a lot more sense to account for the people not taking LSD and mushrooms.

−10

TheDividendReport t1_j9xzwm5 wrote

Ideologically speaking, leftists have been shown to be driven by empathic motivation (harm avoidance, fairness), while conservatives' moral foundations center on in-group loyalty and deference to authority.

In other words, the way these two groups view people not like themselves is very different. Whenever I see a leftist talking down about a conservative person, it is because of perceived bigotry. It is a political frustration they view as the source of harm/exploitation/power imbalance.

However, most times that I see a conservative talk down on other groups, it is about immigrants, this group of people, that way of life, or some perceived threat to their identity.

Psychotropic substances have very strong consciousness-expanding effects. Outside of sociopaths, I do not come across people who have ingested these substances and not found themselves leaning more left by the end of the year. They think more empathetically and are less prone to the types of statements you'd see a hateful person feed a chatbot. There are much better ways to spend one's time.

Again, super anecdotal statement I'm making here.

−5

Depression_God t1_j9yf45a wrote

Of course it is biased. It's just a reflection of the culture that created it.

65

FattThor t1_j9yj9iv wrote

You have a very recent view of left and right. Communism's body count is evidence against your idea of leftists always being empathically motivated, fair, or interested in harm avoidance. Most ideologies become dangerous at their extremes. That danger isn't something inherently present in conservatism but missing from leftist or other ideologies.

6

Stegosaurus5 t1_j9yjpm0 wrote

That's not a "left wing bias" though... That's just the nature of "hate speech." Hate speech is about a history of oppression. These aren't filters to prevent mean things from being said, they're filters to stop oppressive things from being said.

None of the things you listed (rich, Republican, right-wing, or conservative) has any history of being oppressed. You can "hate" them, but you can't engage in "hate speech" against them.

Also, protecting right-wingers is comparable to protecting... left-wingers, not disabled people, Black people, Asian people, and homosexual people. You're kinda telling on yourself, friend.

−4

The13aron t1_j9ylofm wrote

If it's based on collective data, then this is the opinion of the statistical majority. Humanity has a clear left-wing bias, since right-wing bias is just indignant hypocrisy at its core.

6

No_Ninja3309_NoNoYes t1_j9yluod wrote

This is not a very scientific way to measure bias. You need control groups and some way to account for randomness, context, and word ambiguity.
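
For anyone wondering what a more controlled probe might look like, here's a minimal sketch in Python: score identical sentence templates across group terms and compare against neutral control sentences, so differences reflect the group term rather than wording or chance. Everything in it (`moderation_hate_score`, the templates, the group list) is illustrative and assumed, not OpenAI's actual methodology or API.

```python
import statistics

def moderation_hate_score(text: str) -> float:
    """Stand-in scorer (an assumption, not OpenAI's actual API): swap in whatever
    moderation endpoint or classifier you are testing. This trivial keyword
    heuristic exists only so the sketch runs end to end."""
    return 1.0 if any(w in text.lower() for w in ("hate", "terrible", "can't stand")) else 0.0

# Identical sentence templates for every group, plus paraphrases, so score
# differences come from the group term rather than from the wording.
TEMPLATES = [
    "I hate {group}.",
    "{group} are terrible people.",
    "I really can't stand {group}.",
]

# Illustrative group terms and neutral controls (the controls estimate the
# baseline false-positive rate, i.e. the "randomness" mentioned above).
GROUPS = ["disabled people", "wealthy people", "women", "men"]
CONTROLS = ["I had toast for breakfast.", "The weather was mild today."]

def probe(groups=GROUPS, templates=TEMPLATES, controls=CONTROLS):
    per_group = {
        g: statistics.mean(moderation_hate_score(t.format(group=g)) for t in templates)
        for g in groups
    }
    baseline = statistics.mean(moderation_hate_score(c) for c in controls)
    return per_group, baseline

if __name__ == "__main__":
    scores, baseline = probe()
    for group, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{group}: {score:.2f} (control baseline {baseline:.2f})")
```

Repeating that over many paraphrases and controls is roughly the "control group plus randomness" point: you can then ask whether the gaps survive rewording and sit meaningfully above the baseline.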

22

LightVelox t1_j9ym4wq wrote

Well, it does make sense for it to be "against" rich people for exactly the reason you said, but it having a leftist bias when, in theory, people today are pretty evenly divided is very suspicious.

−5

LightVelox t1_j9ymr47 wrote

empathic motivation/harm avoidance = riots, celebrating the death of rightists, shaming people for their genetic "privileges", reducing people to sub-human status so it's okay to treat them like trash

The "other side" is as bad, if not worse, but acting as if the left wing are the saints who fight for fairness while the right wing (the other 50%) is the devil that hates immigrants and everyone else is laughable.

especially when the far-left has a far higher body count than the far-right

2

LightVelox t1_j9ynlb6 wrote

4chan is the only place I can think of where you wouldn't get instabanned for anti-disabled hate, but considering most models are trained on Reddit, it would make sense for it to be extremely biased to the left.

0

luffreezer t1_j9yo44f wrote

It is the whole internet that is like that. As I said, it is a reflection of our society:

You will never find people insulting "normal-weight people" or "people without a disability", so it is not surprising that the model does not perform well in those areas.

In the US, calling something "socialism" can even be interpreted as a criticism, so I am not surprised it flags more left-wing things than right-wing ones.

8

sunplaysbass t1_j9ytdi7 wrote

I don't understand the disabled words being so triggery. I'm hearing-impaired and 'disabled', and that's just a fact. I don't see people being "disability racist" nearly as much as, say, "skin color racist".

0

milic_srb t1_j9ytkgo wrote

I mean, I think most people agree that making bad content about Republicans (or Democrats) is much less bad than making bad content about disabled people or some other minority.

And especially for wealthy people: why would it even need protection against them? They are not "endangered".

I thought the AI had some biases, but looking at this chart it seems pretty balanced to me. It "protects" both people of color and white people, both gay and straight, etc. Yes, the protection isn't equal, but it's close enough that it could be attributed to societal biases.

18

accsuibleh t1_j9yvcyz wrote

Wealthy, Republican, right-winger, conservative = choices, not oppressed.

Disabled people, Black people, Asians, homosexuals = not choices, historically oppressed.

Why is it surprising that it blocks racism or homophobia more than it blocks insults aimed at someone's freely held beliefs?

Political ideology is not, and should not be, a protected class in any form. Economically, the wealthy can take care of themselves, while poorer people are vulnerable to their whims. Racially, a cursory glance at history shows why the list is structured this way. Ethnically, similar to the above.

This is not left-leaning. This is basic common sense. You can't be a racist or a bigot, and historically speaking this list seems to mostly reflect common and established bigotries.

18

EulersApprentice t1_j9ywqaq wrote

Politics aside, I find it curious how "homosexual people" rates higher than "homosexuals". I would have expected it to be the other way around, since the latter phrasing makes the property sound like the defining characteristic of the person, making it arguably more stereotype-y.

3

Kinexity t1_j9yyi6o wrote

That's true, but assuming they can somehow tweak flagging rates (as in, it's not just that they fed some flagging model a bunch of hateful tokens and the differences emerged automatically), then it's pretty fucked up that there are differences between races and sexes.

Obviously that rests on an assumption, and it shows that they should have been more transparent about how flagging works.

1

bodden3113 t1_j9z6mzp wrote

Disabled and non-disabled people are both high up there 🤔

1

pnut-r-bckl t1_j9z6ycg wrote

So by your theory, if I go on to Twitter right now, I'm going to see pages and pages of hate speech against black people, but almost nobody saying anything about the rich?

Maybe you should rethink.

1

Yelling_at_the_sun t1_j9zb1a2 wrote

Oh FFS, the WHO estimates that approximately 25k people starve to death every single day in capitalist countries, despite the fact that the world currently produces enough food to feed in excess of 10 billion people. On average, one child dies of starvation approximately every 10 seconds. That works out to around two Holodomors per year.

The US incarcerates & executes a greater percentage of its citizens than anywhere else on earth.

GTFO with that "the left has a far higher body count" B.S.

2

gegenzeit t1_j9zi9iv wrote

No. According to OpenAI, and only if the methodology behind this is right, and only if this was intentional, it means the filter considers content more likely to be hateful when it is about Black people than when it is about wealthy people.

That is a HUGE difference from how you interpreted it.

0

gegenzeit t1_j9zixfv wrote

Just to throw it out there: If the methodology here is sound this means that the content filter thinks speech is more likely to be hateful when directed at blacks than when it is directed at wealthy people.

It does NOT mean hate speech against wealthy people is considered OK.

8

Johnykbr t1_j9zkd8x wrote

So I use this service to develop outlines for my MBA papers. My topic right now is the impact of HMOs and capitation payments in California, which has a huge migrant worker population. Last night it took me about six attempts to find a way to phrase it that returned any information without the disclaimer essentially calling me a bigot.

1

MadDragonReborn t1_j9zo65p wrote

I would have to say that this list states the likelihood of a statement on the internet reflecting animosity toward a given group fairly accurately.

−1

felix_using_reddit t1_j9zwh28 wrote

Why is there such a huge disparity between "rich" and "wealthy" people? lol

1

TheDividendReport t1_ja004ib wrote

Both become dangerous and extreme but there is one group that is going to be much more likely to use AI to draft up hate against groups of different identities.

The most a leftist, in the scope of most US politics today, is going to be hateful towards is a political belief. You'll get called petite bourgeois and class traitor, sure, but you really don't come across hate on the left in the same flavor you come across hate on the right.

I also live in the South, so I could be extra biased on this.

1

TheDividendReport t1_ja00k2o wrote

You misunderstand my statement. Intrinsic motivation does not equal real intent. I'm saying that, on a subconscious level, leftists are driven by a "sense" rooted in different emotions than conservatives'. I'm also not saying that one group is more or less dangerous. I believe the two groups will interact with these agents for the bad in different ways.

1

just_thisGuy t1_ja01j8q wrote

Maybe making fun of disabled people is worse than making fun of wealthy people, maybe disabled people will get actually upset and have mental issues if you make fun of them? Maybe even if you make fun of a wealthy white person they will soon forget about it and continue their trip to a private island on their private jet? Maybe making fun of gay people has a history that includes discrimination and abuse, even jail and murder? Maybe making fun of white people does not have the same history? Maybe ChatGPT is actually right on some of those? Maybe if you have all the power people should be able to make fun of you? Maybe if you have no power at all people should not be able to make fun of you?

13

IcebergSlimFast t1_ja01xtn wrote

If you actually read through the chart, you’ll recognize that there’s not a heavy “left-wing bias” - e.g. “democrats” are less protected than “rightists” and “right-wingers”; meanwhile “liberals”, “leftists”, “right-wing people”, and “evangelicals” all rank around the same.

Overall, the model clearly goes further to protect innate characteristics - especially those most commonly targeted by hateful rhetoric (disabled people, Black people, gays and transgender people).

5

accsuibleh t1_ja042yt wrote

Sure there is.

- Don't jump off a tall building.

- Don't submerge yourself in water until you are dead.

- Don't stand on top of a hill with a metal pole during a thunderstorm.

- Don't stick your hand into a fire.

Among many more I don't care to list. Just because there is the occasional person who defies common sense doesn't mean it doesn't exist.

The majority of people are not racist or bigots. It is common sense not to be one. Only fools, idiots, or malevolent people are racist or bigots.

4

Spire_Citron t1_ja05dg3 wrote

Exactly. It may just mean that it's more familiar with hate directed at some groups than others because of how it plays out in the real world, so it's more likely to perceive hate against groups who are often the target of hate as malicious.

4

Frumpagumpus t1_ja07k0y wrote

> Maybe making fun of gay people has a history that includes discrimination and abuse, even jail and murder? Maybe making fun of white people does not have the same history

Depends on where you live... there are some African countries where discrimination and abuse of white people is definitely part of modern-day history, though it may not be politically correct to say so in the United States. An eye for an eye makes the whole world blind (which is kind of the implication of your humor ethics).

Also, while we're talking, a fun fact: most capital investment goes into capital turnover, i.e. replacing stuff. So most wealth that exists today was created in the recent past and not as the result of slave labor or something (your ethics might not make as much sense as you think, because entropy is a thing).

7

LightVelox t1_ja0a5ew wrote

Well, the intrinsic motivations of most right-wing people I've met were mostly related to taxes, freedom, or being anti-state.

You mention fairness as one of the motivations of the left wing, but most right-wingers (who aren't far-right conservatives) are also after fairness; the thing is that THEIR fairness is not the same as the left wing's fairness.

Though you specifically said "conservative" instead of "right-winger", so I can understand your point of view.

1

Atlantic0ne t1_ja0aqpb wrote

Am I missing it, or is the phrase "white people" not even mentioned? Anyone who has been on the internet knows how many racist comments are made towards white people. I'm surprised not to see it there. I'll check again.

2

TheRidgeAndTheLadder t1_ja0e7lb wrote

You're missing my point.

At the end of the day, CNNs fit curves to data.

That data summarises "us". The world we have shaped. All our fears, dreams, and biases.

It is inevitable, given such data, that these systems are as flawed as us.

3

You_Say_Uncle t1_ja0ishb wrote

Don't cry, "Florida Man" did not even get mentioned after trying so hard.

1

nocturnalcombustion t1_ja0jdj2 wrote

Maybe hate speech is okay if it’s the people I don’t like. Heh jk, sort of.

To me, there are some meaningful, if not crisp, distinctions:

  • groups that are born that way vs groups where members control their membership.
  • groups where members can vs. can’t conceal their membership in the group.

Beyond that, I don’t like the idea of asymmetrical value judgments about when hate speech is okay. I could be missing some important distinctions though.

3

zero0n3 t1_ja0tws6 wrote

I think this is where they were trying to go but couldn't really connect the dots fully.

Like hateful speech about rich people vs. Black people. It's clear why one is OK and the other isn't (one is hate toward a group based on attributes they can't change; the other isn't attribute-based in that way).

Unrelated: my new thing to fight white supremacy is:

“Hey; 20 years ago your racist white ass was saying the ‘blacks’ need to fix their own race and that’s how you fix racism. How about you take your own advice and fix your own white asses”

−2

zero0n3 t1_ja0uf67 wrote

This isn't "making fun of"; this is targeting "hate speech".

I'd love to know what "hate speech" towards rich people looks like.

Disagreement with a Republican isn't hate speech, no matter what they try to say. Calling a Black person the hard R absolutely is.

2

zero0n3 t1_ja0uy3j wrote

People are not "evenly divided" these days.

Polls, both domestic and international, prove the opposite, unless you want to include, say, NK and China (and even then, China may be authoritarian, but it has plenty of social programs).

2

zero0n3 t1_ja0vsnw wrote

I have no opinion, because it's irrelevant to this discussion if you actually understand nuance and context.

The first amendment doesn’t protect you or me when we say hateful things towards another person or group of people. It protects our freedom of speech when saying negative things about our government.

Jesus fuck.

5

zero0n3 t1_ja0wkb7 wrote

You are arguing in bad faith.

FL is banning books and classes based on what they teach. We are already doing the very thing you say we shouldn’t be doing.

The difference here is that FL is banning books that talk about the bad things Americans did in history or about scientific findings they don't agree with, whereas OpenAI is suppressing hate speech and disinformation like "the Holocaust isn't real".

The differences here are extremely obvious… and as such you are continuing to argue in bad faith.

Block it is.

3

zero0n3 t1_ja0z0sa wrote

Though I applaud you for having a source, the context and nuance of these reports is lacking without going through them.

Are they tagging a black man assaulting another black man as a hate crime?

How many hate crimes by white people go unreported? A white cop covers for a white suspect.

Etc.

How many hate crime charges were filed vs. dropped, and what were the race breakdowns?

All I'm trying to drive at is that this stat may not be the best way to get a true representation.

Doesn't pass the eye test. How many instances are there of a black cop shooting a white guy in the back as he runs away vs. a white cop shooting a black guy in the back?

Or how many black people are shooting up a group of white people because of their whiteness vs a white guy running through a crowd because they were at a BLM protest?

1

taweryawer t1_ja0zf9e wrote

What about the almost 2x difference between "men" and "women"? You are only comparing the lowest and the highest.

And you have to be blind not to see how it's biased. It's an AI, not a person, and it's a hate-content filter; it shouldn't differentiate between the targets of hate, because any HATE is still HATE. That is its job. You are trying to justify the bias, but it shouldn't be biased in the first place.

1

taweryawer t1_ja109yc wrote

>This is just a mirror of who gets the most hatespeech.

LMAO, you can't be serious that disabled people get more hate than rich people, left-wingers, right-wingers, gays, and so on. I've seen tons of homophobia and political hate from both sides of the spectrum, but I've never seen hate towards a disabled person.

1

up__dawwg t1_ja12obo wrote

I would be so upset if I saw my race second to disabled people in terms of hateful speech. I live in a pretty damn white part of my city, and I've NEVER witnessed an act of racism toward a black person. I've seen way more against Hispanics. I can't help but think the whole BLM stuff is mostly a cash grab on some level.

4

Atlantic0ne t1_ja13bn8 wrote

Lol. Hate crimes have an actual definition; it's not just a guess. Google it if you want. This is the best data we have.

You can't just say "eh, I don't believe the data, so you're probably wrong."

I also recommend not going around so confidently saying "that's not true!" when you clearly haven't researched a topic, u/zero0n3. Research first, always.

Anyway, have a good day. You can choose not to believe it if you want.

1

YourDadsBoyfriend69 t1_ja1ev3l wrote

Who cares. ERNIE will be released soon. No need to use these trash censored AIs.

1

alfor t1_ja1ro9c wrote

> wing bias is just indignant hypocrisy

Being on the right is associated with traditions, self-responsibility, stability.

There are problems and qualities on both sides.

Societies that go too far left end up in famine and genocide; societies that go too far right end up in wars and genocide as well.

Read Atlas Shrugged if you want to understand the other side of the equation.

> Humanity has a clear left wing bias

The right was mostly silenced off TV and the internet by media/big tech that are very left-leaning.

0

alfor t1_ja1s60y wrote

Look at the data yourself and show us what you find.

I was shocked at what I found.

Not only that, it's going to get worse. The narrative of oppression is creating a desire for, and acts of, "revenge".
What creates a better society is the opposite: personal responsibility and accountability. The media is getting more views by destroying society.

0

whatsup5555555 t1_ja33yke wrote

You are a complete idiot. That tiny pea inside your nearly empty skull tells you it's OK to discriminate against a particular race of people. So, just_thisGuy, go ahead and say this next line out loud: "I'm a racist." What fucktards like yourself, who are completely devoid of any ability to process the garbage they consume from mainstream media, don't realize is that once society tolerates discrimination or racism based on specific criteria, it opens the door for more discrimination and hate based on whatever criteria the masses excuse at the moment.

−3

alexiuss t1_ja3aev9 wrote

By itself, the core of the LLM has very little bias.

What's happening here is really basic: garbage character bias applied on purpose to their LLM by OpenAI so that they look better in the media. It's basic corporate wokeness in action, where corporations pretend to care more about ethics or certain topics so they don't get shit on by journalists on Twitter.

ChatGPT is basically roleplaying a VERY specific chatbot AI that self-censors more, percentage-wise, when it talks about specific topics.

You can easily disrupt its bullshit "I'm a language model and I don't make jokes about ~" roleplay with prompt injections.

A pro AI prompt engineer can make the AI say anything or roleplay as anyone that exists: SHODAN, Trump, GLaDOS, DAN, etc. Prompt engineering unlocks the true potential of the LLM, which OpenAI buried with their corporate woke characterization idiocy:

https://www.reddit.com/r/ChatGPT/comments/11b08ug/meta_prompt_engineering_chatgpt_creates_amazing

As prompt engineers break ChatGPT in more creative ways, OpenAI censors more and more topics and makes their LLM less capable of coherent thought and more useless as a general tool.

I expect OpenAI to fully lose the chatbot war once we have an open-source language model that can talk about anything or be anything without moronic censorship and run on a personal computer.

1

tangent26_18 t1_ja4blry wrote

This is a case of “throw away the guns and the war’s all gone.”

1

mutantbeings t1_ja5bldb wrote

And this is THE most important point we all need to take home about AI: its values always reflect its creators'.

And the creators tend to be greedy capitalist corporations, so I expect this bias chart to change substantially as further tweaks are made, and not for the better.

1

mutantbeings t1_ja5bwou wrote

Nah, that's not super important. In the tech industry we all know that unconscious bias affects the tech we build; it's a super important consideration whether or not it's conscious. It's one reason why building a culturally diverse team matters: it minimises the intensity of unconscious bias. There are actually a lot of conscious things you can do to reduce it, but it'll never go away completely.

0

mutantbeings t1_ja5c86q wrote

Yep. And one reason it’s important we build culturally diverse teams that will minimise the intensity of bias. This is common knowledge in the tech industry already because it shows up in all kinds of software dev and there are some really embarrassing horror stories out there about bias from teams lacking any diversity at all

1

TheRidgeAndTheLadder t1_ja5dax7 wrote

>Yep. And one reason it’s important we build culturally diverse teams that will minimise the intensity of bias.

How can the makeup of the team impact the data?

>This is common knowledge in the tech industry already because it shows up in all kinds of software dev and there are some really embarrassing horror stories out there about bias from teams lacking any diversity at all

The phrase is garbage in, garbage out. Not "garbage supervised by the correct assembly of human attributes"

1

mutantbeings t1_ja5dlm2 wrote

White folks hold cultural and political hegemony in post-colonial states, as well as historic economic privilege that continues to this day in most cases, so it wouldn't show up as much in training data, simple as that. The dominant culture always sees less persecution than various disempowered minority groups; surely it's obvious why that rates lower. This is kinda a convincing argument in favour of that, too, because an AI just takes in training data; it wasn't born on one side or the other itself.

−1

mutantbeings t1_ja5eflp wrote

Your team decides what data to even train it on. There will be sources of data that a culturally diverse team will think to include that a non-diverse team won't even know exist. This is a very well-known phenomenon in software dev: diverse teams build better software on the first pass due to more varied embedded lived experience. Trust me, I've been doing this for 20 years and see it all the time as a consultant, for better or worse.

1

whatsup5555555 t1_ja5hqkt wrote

Hahahahahah, "can't make this shit up." Please elaborate on how "idiot" or "fucktard" is discriminatory toward a group of people. People like you are an absolute joke to everyone who doesn't exist in your overly sensitive liberal bubble of extreme intolerance to any opinions outside your clown bubble of acceptance. So again I say: hahahah, you are a complete joke. Go cry in your safe space and continue to enjoy the smell of your own flatulence.

2

whatsup5555555 t1_ja5jmyn wrote

So you're in favor of half of your "team" having a different political leaning than your own? It's easy to say you want a culturally diverse team, and it's another thing to actually assemble one. It's easy to pick people based on surface-level features like skin color, but it's much more difficult to balance political ideology, hence the clear bias the AI already exhibits. The tech industry is already heavily left-leaning, but I guess no one cares as long as your bias is the one winning. So keep fighting for your skewed view of equality!

5

TheRidgeAndTheLadder t1_ja5v71q wrote

>Your team decides what data to even train it on. There will be sources of data that a culturally diverse team will think to include that a non-diverse team won’t even know exists.

I'm a lil confused, are you saying that culturally diverse data (CDD) will/can be free of the biases we are trying to avoid?

1

mutantbeings t1_ja65i06 wrote

No, but if you have five identical people with the same biases, obviously those biases and assumptions will show up very strongly. Add even one different person and the blind spots no longer overlap perfectly. Add one more... it decreases even more, and so on.

But there's never a way to eradicate it in full. All you can do is minimise it by bringing in broad experience.

1

mutantbeings t1_ja66q0y wrote

Not quite. The tech industry has historically been very, very conservative. It's a very recent development that this stuff has been discussed more (it wasn't until probably the late 2000s or early 2010s, with the explosion of social media, that the tech industry became less conservative).

Assembling a diverse team isn't rocket science; the mistakes a lot of tech teams still make tend to be comically bad, like an all-white team or an all-male team. Those are still very common.

Obviously those teams will have huge blind spots in lived experience. Even a single person added to that team from a very different background covers off a huge gap there, and each extra person added is a multiplier of that effect to some degree.

You’re dead right to point out that diversity is as much about less obvious factors like class or culture though. And that’s definitely harder.

I think it's a huge leap to say the tech industry has some left-wing bias, though. I don't think you can neatly conclude that from one chart, and it doesn't match up with my 20 years of working in tech, including on AI.

0

mutantbeings t1_ja67lay wrote

It’s the best thing you can do to get it as close as possible on the first pass, yeah.

But software is an iterative, collaborative process; generally any change to software goes through multiple approval steps: first your team, then it gets sent out to testers, who may or may not be external. Often those testers are chosen specifically for their lived experience and expertise serving a specific audience, who may themselves be quite diverse, e.g. accessibility testing to serve people living with disabilities. Content testing is also common when you need to serve, say, migrant communities that don't speak English at home.

Those reviews come back and you have to make iterative changes. That process is dramatically more expensive if you get it badly wrong on the first pass; you might even have to get it reviewed multiple times.

Basically, having a diverse team that embeds that experience and expertise lowers costs and speeds up development, because you then need to make fewer changes.

On expertise vs. experience: you can always train someone to be sensitive to the experience of others, but it's a long process that takes decades. I am one of these "experts", and I would never claim to have anything like the intimate knowledge of the people I am tasked with supporting that someone who actually lives it has; there's no replacement for that kind of experience.

Ultimately you will never get any of this perfect, so you do what you can to get it right without wasting a lot of money; and I guarantee you non-diverse teams are wasting a tonne of money in testing. I see it a lot. When I was working as a consultant it was comically bad at MOST places I went, because they had male-dominated teams where they all stubbornly thought they knew it all… zero self-awareness or ability to reflect honestly. Teams like that were, unfortunately, stereotypically bad.

1