idrajitsc t1_iyaq0ha wrote

I mean, just throwing up your hands and saying "sure it's probably nothing, but most things are nothing" is a cop-out: why are you posting it here then?

You're contradicting yourself. If it's nothing more than a random text generator with Plato's mannerisms, why is it interesting, and why call it a tool for approaching philosophical problems? And if you insist it has something more profound to say--it doesn't--then it's incumbent on you to justify that with something more than "it's really big and complex, so maybe it's doing something inexplicable."

1

idrajitsc t1_iy9o98b wrote

That paper addresses your first question directly, and better than I can. But in brief, it's nonsense because how could it not be? If there is real, interesting information content to what it's saying, how was it generated? How would you expect your network to have an understanding of anything, use that understanding to synthesize new ideas, and then accurately convey those ideas to you? All it has been trained to do is probabilistically produce coherent text--the training process has no interaction with the information content of the training texts, much less anything that would allow it to generate novel meaning.
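To be concrete, the entire training signal is next-token prediction. A minimal sketch of that objective, assuming a generic autoregressive model (the names here are made up, not anyone's actual code):

```python
import torch.nn.functional as F

# Minimal sketch of the language-modeling objective (hypothetical model/tokens).
# The only signal is "how much probability did you assign to the token that
# actually came next?" Nothing here measures whether the text is true,
# meaningful, or novel.
def lm_loss(model, tokens):             # tokens: (batch, seq_len) integer ids
    logits = model(tokens[:, :-1])      # predicted distribution at each position
    targets = tokens[:, 1:]             # the tokens that actually followed
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```

Everything the model "knows" has to fall out of minimizing that one number over a big corpus.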

As for the rest of your reasoning, you could use the same argument for anything at all that causes you to think about things. In line with that paper, would you want to spend serious intellectual effort on deriving deeper meaning from a parrot's chatter? Maybe the network accidentally outputs something that sends you along a path to productive thoughts. Or maybe you waste all your time trying to turn lead into gold. Like, of course you're free to experiment with it, but it's irresponsible to pretend it's outputting anything profound if you're going to be sharing it with other people.

1

idrajitsc t1_iy8hyiv wrote

That's the thing though, it doesn't explore or generate new ideas. It generates grammatically correct text with a bit of flavor that has no actual meaning--meaning requires an intent to convey information. All of the ideas are things you impose on it. There's none of the weird intuition or perspective a child offers. It's just a random text generator you're using to seed your ideas.

And that'd be... okay I guess? Not particularly efficient, and maybe counterproductive since it'll bias you towards thinking about nonsense, but not directly damaging. But even if you didn't intend it, the obvious implication here is "this is how Plato would answer my question!", which lends it a credibility it doesn't deserve. You should read this paper, particularly section 5 and its citations.

edit: sorry I meant section 6

1

idrajitsc t1_iy5apu8 wrote

It cannot, in any way. Nothing about the training process suggests that the trained networks can interpolate in "idea space"; they just work on language at a superficial level. There's no reason to associate the meaning of anything the network says with the original philosopher unless it's directly parroting them.

This is fine for cutesy fun stuff, but calling it "Speaking with Plato" and pretending that it can contribute to philosophy is very misleading, even though everyone and their mother is doing similar things with LLMs now.

1

idrajitsc t1_ix8lrk0 wrote

I mean, I'm not really sure what your ask is. People do work on RL for NLP. It just doesn't offer any huge advantage, and the reason your intuition doesn't translate to an actual advantage is that writing a reward function that reproduces the human feedback a baby receives is essentially impossible. And not just in an "it's hard, but if we put enough work into it we can figure it out" kind of way.
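To be clear about where the impossibility sits: the RL machinery itself is the easy part; it's the body of the reward function that nobody can write. A purely hypothetical sketch:

```python
def reward(history, utterance):
    """Score an utterance the way a caregiver's reaction would: approval,
    correction, confusion, shared attention, follow-up questions, and so on.

    This is the part that's essentially impossible to write down -- not
    "hard but tractable with enough effort", but unspecified in principle.
    Everything downstream (policy gradients, value estimates) depends on it.
    """
    raise NotImplementedError
```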

2

idrajitsc t1_ix56883 wrote

I think that's still answered pretty well by their original comment: modeling probability distributions over sequences of words, given sufficient compute and good enough corpora, gets pretty close to the superficial aspects of language. And LLMs can now learn those distributions well, so why insist on RL instead?
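Concretely, "probability distributions over sequences of words" just means the usual autoregressive factorization, roughly

$$ p(w_1, \dots, w_T) = \prod_{t=1}^{T} p(w_t \mid w_1, \dots, w_{t-1}) $$

Fit each conditional well and you get fluent-looking text; nothing in that expression requires the model to mean anything by the words.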

For actually learning language--in the sense of using it to convey meaningful, appropriate information, which LLMs so far cannot do--maybe it's better to take an RL approach. But I don't know how to write a reward function that encompasses that. So as long as we can't do the superior thing with either approach, we might as well focus on the easier approach to the superficial thing.

2

idrajitsc t1_it8iuw2 wrote

It is absolutely not true that the problem can "definitely be solved." You have no grounds to make such a ridiculously confident statement about such a complicated problem. AI is not magic that can solve any problem you show it if you just sacrifice enough GPUs to the ML god.

The notion of constrained optimization is not exactly new; that isn't the hard part. And while solving a constrained multi-objective optimization problem is generally gonna be NP-hard, if it even has a well-defined solution, even that isn't actually the hard part.
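For what it's worth, the generic template is trivial to write down, which is exactly the point:

$$ \min_{x}\; \big(f_1(x), \dots, f_k(x)\big) \quad \text{subject to} \quad g_j(x) \le 0, \;\; j = 1, \dots, m $$

Nobody is stuck on that part. What nobody can do is say what the decision variables $x$, the objectives $f_i$, and the constraints $g_j$ actually are for something like "economic policy".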

The problem is figuring out what the inputs and measured outcomes should even be, and then getting them into a form that an AI can actually process. I was not asking you to tell me that it would be framed as an optimization problem; they all are. I was asking you what the actual objective and actual constraints are. Because there is no way you can possibly summarize every important impact of an economic policy in an objective function, much less do so while differentiating it across different interest groups. Nor could you actually encode all of the input information that might be relevant.

And then what would you even train on, if you could accomplish that already impossible task? It's not like we have a large or terribly diverse set of worked examples of fully characterized policies and outcomes. And if you wanted to take a more unsupervised route, it basically amounts to accurately simulating an economy, which in itself would be worth all the Nobel prizes.

6

idrajitsc t1_it8b8p2 wrote

I mean, economists can account for competing concerns. They have been for centuries. The problem isn't a lack of processing power; it's the fact that those concerns are competing. You have to make subjective decisions that favor some and harm others.
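Even the textbook way of combining competing concerns makes the subjectivity explicit: you scalarize with weights, something like

$$ \max_{x} \sum_{g} w_g \, U_g(x) $$

where $U_g(x)$ is how policy $x$ works out for interest group $g$. Choosing the weights $w_g$ is the political, subjective decision, and no amount of compute makes it for you.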

Also, you're just kind of asserting that AI will be able to solve problems there's no reason to believe it can; scaling compute power is not the be-all and end-all of problem solving. What kind of objective/reward function do you think you can write that does even a half-decent job of encompassing the impact of social and economic policy on all those different interest groups? Existing AI methods are just not at all amenable to a problem like this.

4

idrajitsc t1_it88yrd wrote

The thing is that there can be a cost to just giving things a go. Like the work that claims to use facial characteristics to predict personality traits or sexuality, or the recidivism predictors that just launder existing racist practices. There are so many existing examples of marginalized groups getting screwed over in surprising ways by ML algorithms. Now imagine the damage that could be done by society-wide policy proposals--do you really hope to fully specify a problem that complex well enough to control those dangers?

It's not okay to just throw AI at an important problem to see what sticks. You need a well-founded reason to believe that AI is capable of solving the problem you're posing, and a very thorough analysis of the potential harms and how you're going to mitigate them.

And really, there's absolutely no reason to think that near-term AI has any business addressing this kind of problem. AI doesn't do, and isn't anywhere near, the kind of fuzzy, flexible reasoning and synthesized multi-domain expertise needed for this kind of work. And the usual problems with optimizing for proxy metrics would be an overriding concern here.

5

idrajitsc t1_it7u7c2 wrote

You can say that about any problem: maybe some hypothetical, very powerful AI can solve it better than we can. That isn't really a good reason in its own right to pursue something. Is there any real reason AI is well suited to this problem? It's hard to imagine a way to quantify all the important outcomes and encode all the important inputs for something as complicated as real-world policy problems.

And some of the political problems don't admit a balance of interests: in the US, some politicians actively run on anti-government platforms because an ineffectual government gives more power to their donors. There's no real way to square that with a government that solves problems; they're diametrically opposed. The other poster is entirely right that improving current policy proposals is nearly irrelevant to getting good policy implemented.

4