mutantbeings t1_ja5eflp wrote
Reply to comment by TheRidgeAndTheLadder in Likelihood of OpenAI moderation flagging a sentence containing negative adjectives about a demographic as 'Hateful'. by grungabunga
Your team decides what data to even train it on. There will be sources of data that a culturally diverse team will think to include that a non-diverse team won't even know exist. This is a very well known phenomenon in software dev: diverse teams build better software on the first pass because of the more varied lived experience embedded in the team. Trust me, I've been doing this for 20 years and see it all the time as a consultant, for better or worse.
TheRidgeAndTheLadder t1_ja5v71q wrote
>Your team decides what data to even train it on. There will be sources of data that a culturally diverse team will think to include that a non-diverse team won’t even know exists.
I'm a lil confused, are you saying that culturally diverse data (CDD) will/can be free of the biases we are trying to avoid?
mutantbeings t1_ja65i06 wrote
No, but if you have 5 identical people with the same biases, obviously those biases and assumptions will show up very strongly. Add even one different person and the areas where blind spots exist no longer overlap perfectly. Add one more and the overlap shrinks further, and so on.

But there's never a way to eradicate it in full. All you can do is minimise it by bringing in broad experience.
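The overlap intuition above can be sketched as a toy model (purely illustrative; the blind-spot labels and team makeup are made up for the example). Each member is a set of blind spots, and the team's *shared* blind spots are the intersection across all members, which can only shrink or stay the same as members with different backgrounds are added:

```python
def shared_blind_spots(team):
    """Return the blind spots common to every member of the team."""
    result = set(team[0])
    for member in team[1:]:
        result &= set(member)  # intersection: keep only spots everyone shares
    return result

# Five identical members: every single blind spot is shared by the whole team.
identical = [{"spot_a", "spot_b", "spot_c"}] * 5
print(shared_blind_spots(identical))  # {'spot_a', 'spot_b', 'spot_c'}

# Add one person with a different background: the shared overlap shrinks.
diverse = identical + [{"spot_a", "spot_d"}]
print(shared_blind_spots(diverse))  # {'spot_a'}
```

Note the model also matches the "never eradicate it in full" point: the intersection can shrink toward empty, but any blind spot shared by literally everyone on the team survives no matter how many people you add.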
TheRidgeAndTheLadder t1_ja6646o wrote
Is that really all we can do?
mutantbeings t1_ja67lay wrote
It’s the best thing you can do to get it as close as possible on the first pass, yeah.
But software is an iterative, collaborative process; generally any change goes through multiple approval steps. First it's approved by your team, then it gets sent out to testers, who may or may not be external. Often those testers are chosen specifically for their lived experience and expertise serving a specific audience, and they may themselves be quite diverse. E.g. accessibility testing to serve people living with disabilities, or content testing when you need to serve, say, migrant communities that don't speak English at home.
Those reviews come back and you have to make iterative changes. That process is dramatically more expensive if you get it badly wrong on the first pass; you might even have to get it reviewed multiple times.
Basically, having a diverse team that embeds that experience + expertise lowers costs and speeds up development, because you then need to make fewer changes.
On expertise vs experience: you can train someone to be sensitive to the experiences of others, but it's a long process that can take decades. I am one of these "experts" and I would never claim anything like the intimate knowledge of the people I am tasked with supporting that someone who actually lives it has; there's no substitute for that kind of lived experience.
Ultimately you will never get any of this perfect, so you do what you can to get it right without wasting a lot of money; and I guarantee you non-diverse teams are wasting a tonne of money in testing. I see it a lot. When I was working as a consultant it was comically bad at MOST places I went, because they had male-dominated teams who all stubbornly thought they knew it all. Zero self-awareness or ability to reflect honestly; teams like that were, unfortunately, stereotypically bad.