Submitted by valdanylchuk t3_y9ryrd in MachineLearning
valdanylchuk OP t1_it86vex wrote
Reply to comment by idrajitsc in [D] Do any major ML research groups focus on policy-making applications? by valdanylchuk
I agree my response was hand-waving, but GP suggested giving up on any attempt at AI/ML-based optimization proposals a priori, which I think is too strict. If we never attack these problems, we can never win. And different people can try different angles of attack.
idrajitsc t1_it88yrd wrote
The thing is that there can be a cost to just giving things a go. Think of the work claiming to predict personality traits or sexuality from facial characteristics, or the recidivism predictors that just launder existing racist practices. There are plenty of existing examples of marginalized groups getting screwed over in surprising ways by ML algorithms. Now imagine the damage that could be done by society-wide policy proposals; how could you hope to fully specify a problem that complex well enough to control those dangers?
It's not okay to just throw AI at an important problem to see what sticks. You need a well-founded reason to believe AI is capable of solving the problem you're posing, and a very thorough analysis of the potential harms and how you're going to mitigate them.
And really, there's absolutely no reason to think that near-term AI has any business addressing this kind of problem. AI doesn't do, and isn't near, the kind of fuzzy, flexible reasoning and synthesized multi-domain expertise this work requires. The usual problems with optimizing for proxy metrics would be an overriding concern here.