Submitted by izumi3682 t3_xxelcu in Futurology
Leanardoe t1_irbtatt wrote
Interesting that AI is being restricted to minimize the effects of discrimination on minorities, rather than seeking to fix the root problem of that discrimination. RIP progress.
whatTheBumfuck t1_irbv66a wrote
Uh well AI is already here, and systemic racism/discrimination isn’t going to be fixed overnight soooo.... In fact AI seems to amplify it in some disturbing ways... which is why this bill is needed...
Leanardoe t1_irbwk43 wrote
Yeah, AI is more advanced because it can compute millions of times a second, so it makes sense it would amplify the bad parts of society more than the good. But I feel restricting AI will simply hinder the advances that could fix this issue. I’d rather see ways to fix systemic racism actually implemented.
hamsandwiches2015 t1_irbxwkv wrote
I don’t think restricting AI will hinder racial progress. It’s really on humans to deal with systemic racism because it’s too complex an issue for AI to handle.
SnapcasterWizard t1_irco1c4 wrote
AI is going to be solving problems more complex than we can; that’s kind of the whole point of it. So why is this domain different?
hamsandwiches2015 t1_ircwatz wrote
Cause AI can’t affect human behavior. People are still going to be assholes in ways that affect race, sex, gender, age, or religion. So that’s on society to change, not AI.
Leanardoe t1_irby9u3 wrote
No, I didn’t mean it will hinder racial progress; I meant the development of more advanced AI. Since you mention it, though, I don’t see limiting AI helping the issue, as advanced AI could be used to help combat misinformation.
whatTheBumfuck t1_irc43v3 wrote
Generally speaking it's better to do something slowly at a more controlled pace if you intend to do it safely. The thing with AGI is you can really only fuck it up once, then the next day your civilization has been turned into a paper clip factory. In the long run things like this are going to make positive outcomes more likely.
Leanardoe t1_irchxfz wrote
I see your point, I just think placing roadblocks now is premature. If we get to the point where AI is starting to tread the line of independent thought, that’s when I think limits and guidelines need to be made, in case of the unlikely Terminator event everyone fears lol.
ssjx7squall t1_irbvwva wrote
I mean they’ve shown AI can be horrifically racist in the past.
Leanardoe t1_irbwdf8 wrote
I didn’t disagree. It’s society that’s the issue.
NEXUS_6_LEON t1_irbyoj5 wrote
I’m a bit confused. How exactly would AI be used in ways that are racist or exploit minorities? Not doubting it’s real, but maybe if the article gave some concrete examples vs abstractions it would be more clear.
Leanardoe t1_irbzjds wrote
Look up Google LaMDA: they tested it with crowdsourced data and it kept turning racist in conversation. Now they only use carefully vetted sources for its database. Same with Cleverbot; when it was in its prime it was very racist.
I found an article discussing the Google engineer’s opinion. It’s not a source from Google, but they likely buried that. The Cleverbot incidents are widely reported on YouTube. https://www.businessinsider.com/google-engineer-blake-lemoine-ai-ethics-lamda-racist-2022-7
[deleted] t1_irc1nus wrote
[deleted]
CptRabbitFace t1_irc64tr wrote
For one example, people have suggested using AI in court sentencing in an attempt to remove judicial bias. However, AI trained on biased data sets tends to recreate those biases. It sounds like this sort of problem is what this bill of rights is meant to address.
[deleted] t1_ird5caa wrote
[deleted]
Leanardoe t1_irfc9wf wrote
Welcome to the 21st century, where phytoplankton is dying out and microplastics are slowly being absorbed into our bloodstream.
Leanardoe t1_irci520 wrote
It would be nice if it worked that way. What legislation requires and how companies actually implement those requirements tend to differ more than one might expect.
softnmushy t1_irfc8gx wrote
There's not exactly a clear way to "fix" racism.
caustic_kiwi t1_irbzi8l wrote
I haven't thoroughly read the document (I sincerely doubt you have either) but I saw nothing at all in the vein of "restricting progress".
AI trained on biased data, for example, will turn out racist because that’s what it was given to learn from. Codifying into law the need to avoid outcomes like that doesn’t hinder progress, it forces us to improve AI technology and... you know... make progress.
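To make that concrete, here’s a minimal sketch (synthetic data, made-up feature names, numpy and scikit-learn assumed) of how a model trained on biased historical labels just learns to reproduce the bias:

```python
# Minimal sketch: a classifier trained on biased historical decisions reproduces the bias.
# Data and feature names are synthetic/hypothetical; assumes numpy + scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
qualification = rng.normal(size=n)      # the "legitimate" signal
group = rng.integers(0, 2, size=n)      # protected attribute (0 or 1)

# Biased historical outcomes: identical qualification, but group 1 gets penalized.
logits = 2.0 * qualification - 1.5 * group
labels = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Train on both features, exactly as a naive pipeline might.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, labels)

# Same qualification, different group -> noticeably different predicted approval rate.
same_person = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_person)[:, 1])
```

And just dropping the group column usually isn’t enough, because other features can act as proxies for it, which is why you have to look at the data and the outcomes, not just the code.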
Leanardoe t1_irbzvmq wrote
Restricting anything by way of law is inherently a restriction... Have you ever worked in any form of development workflow? Now AI devs have to jump through hoops before pushing their changes. If you haven’t, then there’s no need for the condescending remarks.
IxI_DUCK_IxI t1_irc17by wrote
Made the Agile process worse? Impossible! Scrum Leader! Fix this!
[deleted] t1_irc1jr2 wrote
[removed]