Submitted by valdanylchuk t3_y9ryrd in MachineLearning

While reading a recent DeepMind paper on an economic game:

https://www.deepmind.com/blog/human-centred-mechanism-design-with-democratic-ai

https://www.nature.com/articles/s41562-022-01383-x.pdf

I encountered this disclaimer:

>"Finally, we emphasize that our results do not imply support for a
form of ‘AI government’, whereby autonomous agents make policy
decisions without human intervention"

It is obvious we want some human oversight. Still, optimizing our societal policies seems to me one of the most promising positive transformations ML could bring about, much better than a new phone assistant.

There are known promising approaches, for example, to reducing poverty and inequality: things like restructuring social safety nets, labor laws, tax codes, etc. Perhaps ML could help with some of them:

https://talkpoverty.org/2015/06/10/solutions-economic-inequality/

ML research centers want to make an impact on society. For example, Demis Hassabis of DeepMind said he had a list of 1,000 promising scientific problems he wanted to approach with ML, in the hope of making a Nobel-grade discovery one day.

Does any ML company, agency, conference, or forum pursue policy-making applications specifically? When would you estimate we might see major changes in social policies driven by ML? I would bet this does not require AGI in the strong sense, so it might be possible relatively soon, given political will, funding, and interest. And there should be: the first country to embrace this accelerated optimization should see major economic advantages.

12

Comments


throwawayP115LG t1_it7pqs3 wrote

Speaking as an American (I think this is less true in other countries):

The problem with enacting good policy is not fundamentally technocratic; that is, it is not that we don't know how to optimize policies. The problem is political: many forces in society don't want us to have an efficient tax code, or climate policy, etc. Many powerful political actors want to use the power of the state to enrich themselves, punish out-groups, and ensure impunity for their cronies.

Academics in econ and other disciplines churn out optimized policy suggestions all the time (I see this constantly in the climate literature), and the political system has little will to implement them.

AI for optimal policy is, at least in my country, solving the wrong problem.

18

valdanylchuk OP t1_it7qv6d wrote

I understand the political problem. I think it still makes sense to push for better proposals on the technical side. Who knows, maybe some advanced AI could even balance the interests of multiple groups better than we do. Or analyze the legislative history and political press of every country throughout history, and find correlations in what types of programs work best in combination under certain conditions. Stuff like that.

−1

idrajitsc t1_it7u7c2 wrote

You can say that about any problem: maybe some hypothetical, very powerful AI could solve it better than we can. That isn't really a good reason in its own right to pursue something. Is there any real reason AI is well suited to this problem? It's hard to imagine a way to quantify all the important outcomes and encode all the important inputs for something as complicated as real-world policy problems.

And some of the political problems don't admit a balance of interests: in the US, some politicians actively run on anti-government platforms, because an ineffectual government gives more power to their donors. There's no real way to square that with a government that solves problems; they're diametrically opposed. The other poster is entirely right that improving current policy proposals is nearly irrelevant to getting good policy implemented.

4

valdanylchuk OP t1_it86vex wrote

I agree my response was hand-wavy, but GP suggested giving up on any attempt at AI/ML-based optimization proposals a priori, which I think is too strict. If we never attack these problems, we can never win. And different people can try different angles of attack.

2

idrajitsc t1_it88yrd wrote

The thing is that there can be a cost to just giving things a go. Like the work that claims to use facial characteristics to predict personality traits or sexuality, or the recidivism predictors that just launder existing racist practices. There are so many examples of marginalized groups getting screwed over in surprising ways by ML algorithms. Now imagine the damage that could be done by society-wide policy proposals; do you really hope to specify a problem that complex well enough to control those dangers?

It's not okay to just throw AI at an important problem to see what sticks; you need a well-founded reason to believe AI is capable of solving the problem you're posing, and a very thorough analysis of the potential harms and how you're going to mitigate them.

And really, there's absolutely no reason to think that near-term AI has any business addressing this kind of problem. AI doesn't do, and isn't near, the kind of fuzzy, flexible reasoning and synthesized multi-domain expertise needed for this kind of work. And the usual problem with optimizing proxy metrics would be an overriding concern here.

5

RobbinDeBank t1_it865zk wrote

I would say the complicated nature of real-world policy is exactly why AI will eventually be capable of making better policy than humans. Economists can still produce optimized social and economic policies, but they just can't account for all 100 different interest groups with different political motives in a real-world scenario. An AI system could, due to the computing power it possesses. I think AI can be a key to making incremental societal progress. Instead of the current situation where oligarchs get the whole pie, the AI solution could leave them with a good chunk of it while the public gets a decent chunk too. That's incremental progress: not ideal, but achievable.

1

idrajitsc t1_it8b8p2 wrote

I mean, economists can account for competing concerns. They have been for centuries. The problem isn't a lack of processing power, it's the fact that those concerns are competing. You have to make subjective decisions which favor some and harm others.

Also, you're just asserting that AI will be able to solve problems there's no reason to believe it can; scaling compute is not the be-all and end-all of problem solving. What kind of objective/reward function do you think you can write that does even a half-decent job of encompassing the impact of social and economic policy on all those different interest groups? Existing AI methods just are not amenable to a problem like this.
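To make that concrete, here is a minimal sketch (the function and all the numbers are hypothetical): even the simplest weighted-sum scalarization of a multi-objective problem forces you to pick weights, and picking the weights is itself the subjective political decision.

```python
# Minimal sketch: any single "social welfare" objective has to collapse the
# utilities of many groups into one number. The weights are hypothetical;
# choosing them IS the subjective decision, not a technical detail.

def social_welfare(group_utilities: list[float], weights: list[float]) -> float:
    """Weighted-sum scalarization of a multi-objective problem."""
    return sum(w * u for w, u in zip(weights, group_utilities, strict=True))

# Same two policies, two defensible weightings, opposite conclusions:
policy_a = [3.0, -1.0]  # helps group 0 a lot, hurts group 1
policy_b = [1.0, 1.0]   # helps both mildly

print(social_welfare(policy_a, [0.8, 0.2]))  # 2.2  -> A looks better
print(social_welfare(policy_b, [0.8, 0.2]))  # 1.0
print(social_welfare(policy_a, [0.2, 0.8]))  # -0.2 -> B looks better
print(social_welfare(policy_b, [0.2, 0.8]))  # 1.0
```

No amount of compute resolves which weighting is "right"; that's the political question.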

4

RobbinDeBank t1_it8eogo wrote

Let's say we have this problem with 100 sides: the public and 99 interest groups. In an ideal world, we would maximize public good (low unemployment, high economic growth, high income, low financial and social inequality, etc.) and interest groups would have no more power than the average individual. We all know this is not the case in the real world: those interest groups hold disproportionate political power.

The problem then becomes a constrained optimization problem. We still maximize public good, but now subject to the constraints imposed by the interest groups. The main constraint could be, for example, the number of votes (maybe we need 51%, maybe more like 60% or 66%): partially satisfy the interest groups just enough to achieve the majority needed to pass the policy. This is essentially a trade-off, sacrificing part of the ideally optimized public good to gain enough votes to get the policy passed. That constraint then breaks down further to account for each of the 99 groups, so together this is a huge and complex constrained optimization problem. The solution could be something like: give in to most of the demands of 90 groups, ignore the other 9, and now we have enough votes and the public still benefits a whole lot.
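As a toy sketch of that formulation (all numbers are synthetic, and `scipy.optimize.milp` just stands in for whatever solver a real system would use): pick the subset of groups to appease that sacrifices the least public good while still reaching a majority.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

rng = np.random.default_rng(0)
n_groups = 99

# Synthetic placeholders: appeasing group i buys `votes[i]` percentage points
# of support but sacrifices `cost[i]` units of idealized public good.
votes = rng.uniform(0.1, 2.0, n_groups)
cost = rng.uniform(0.0, 1.0, n_groups)

# Binary decision x[i]: do we give in to group i's demands? milp minimizes,
# so minimizing sacrificed public good = maximizing what the public keeps,
# subject to assembling at least a 51% majority.
res = milp(
    c=cost,
    constraints=LinearConstraint(votes, lb=51.0),
    integrality=np.ones(n_groups),  # all decision variables integer (binary here)
    bounds=Bounds(0, 1),
)
assert res.success, res.message

chosen = res.x > 0.5
print(f"appease {chosen.sum()} of {n_groups} groups, "
      f"vote share {votes[chosen].sum():.1f}%, "
      f"public good sacrificed {res.fun:.2f}")
```

Here the `votes` and `cost` vectors are invented; a real system would need to estimate them, which is where the domain expertise comes in.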

That is a rough idea from me without expert domain knowledge. With the funding of major AI labs like DeepMind and the expert knowledge they can recruit, the problem can definitely be solved in a real-world case. Human economists can only write a solution to a smaller problem, within one industry for example, not a solution to a problem this complex.

0

idrajitsc t1_it8iuw2 wrote

It is absolutely not true that the problem can "definitely be solved." You have no grounds to make such a ridiculously confident statement about such a complicated problem. AI is not magic which can solve any sort of problem you show if you just sacrifice enough GPUs to the ML god.

The notion of constrained optimization is not exactly new; that isn't the hard part. And while solving a constrained multi-objective optimization problem is generally going to be NP-hard, if it even has a well-defined solution, even that isn't actually the hard part.

The problem is figuring out what the inputs and measured outcomes should even be, and then getting them into a form an AI can actually process. I was not asking you to tell me that it would be an optimization problem; that's what they all are. I was asking what the actual objective and actual constraints are. Because there is no way you can summarize every important impact of an economic policy in an objective function, much less do so while differentiating it across different interest groups. Nor could you actually encode all of the input information that might be relevant.

And then what would you even train on if you could accomplish that already impossible task? It's not like we have a large or terribly diverse set of worked examples of fully characterized policies and outcomes. And if you wanted to take a more unsupervised route, it basically amounts to accurately simulating an economy, which in itself would be worth all the Nobel prizes.

6

Hydreigon92 t1_it8qycv wrote

Stanford has a Computational Policy Lab that focuses on using ML and data to measure the impact of policy changes. Carnegie Mellon has a joint PhD program in Machine Learning and Public Policy. There's also a research conference, ACM EAAMO, about combining algorithmic theory, economics, and public policy.

My personal research interest is in combining social work with machine learning to design better social work interventions, and there has been work in this space using NLP to intervene in gang shootings and optimizing homelessness services for youth in LA.

4

valdanylchuk OP t1_it8w3iz wrote

Thank you for the excellent pointers! Exactly the kind I was hoping to find. I have too little background to even ask Google effectively. Maybe I can follow some of the articles there.

2

rehrev t1_it72huq wrote

Policy-making AI is the only possible path to catastrophe by AI, if you ask me.

2

valdanylchuk OP t1_it752xr wrote

No, not the only one. There is also the risk of weaponization, resource competition, all sorts of misunderstandings... Sometimes it feels like the risks of AI are better researched than the benefits.

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence

However, there are risks with any technology, starting with fire and metalworking, and they are just something to guard against, not something that should stop us from using the technology to our advantage.

In the case of policy making, obviously making AI our God Emperor is not the first step we would jump at. It is about finding some correlations and balancing some equations.

1

WikiSummarizerBot t1_it75484 wrote

Existential risk from artificial general intelligence

>Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control.


1