Ingvariuss OP t1_iy7yyw9 wrote

Hi,

Thanks for sharing your thoughts! You do have a point that we need to be careful not to be deceived by the superficiality of AI.

The name was chosen to make the project more interesting, and it is misleading only to the extent that someone is naive enough to believe they can actually talk to a dead person.

As for solving practical problems, I have to disagree with you from a cognitive standpoint. AI models like these can be used for fun, but they can also be used to explore new ideas, or new angles on a philosopher or a philosophy, that might have evaded us due to the combinatorial explosion of possibilities.

That said, we do need to acknowledge that the AI in this example isn't directly solving a problem; it's a tool a human can use to explore the problem, spark new ideas or approaches, and in turn contribute to advancing our knowledge.

It's a bit like those stories where a child sparks a top scientist to uncover something new simply by asking a question or offering an idea that our "grown-up" cognitive framing would have filtered out.


idrajitsc t1_iy8hyiv wrote

That's the thing, though: it doesn't explore or generate new ideas. It generates grammatically correct text with a bit of flavor but no actual meaning; meaning requires an intent to convey information. All of the ideas are things you impose on it. There's none of the weird intuition or perspective a child offers. It's just a random text generator you're using to seed your ideas.

And that'd be... okay, I guess? Not particularly efficient, and maybe counterproductive since it'll bias you toward thinking about nonsense, but not directly damaging. But even if you didn't intend it, the obvious implication here is "this is how Plato would answer my question!", which lends it a credibility it doesn't deserve. You should read this paper, particularly section 5 and its citations.

edit: sorry I meant section 6


Ingvariuss OP t1_iy9hgqy wrote

Thanks for sharing this paper! I'll give it a read in the coming days. As for making you think about nonsense, who's to say that it is nonsense?

Especially if it manages to give you a worthwhile idea to ponder productively. Even if it only helps some people "seed" their own ideas, the seed can sprout into something useful and/or thought-provoking. It's a psychotechnology of sorts. For example, we might say that alchemists and astrologers dealt in "nonsense," yet they still laid foundations for chemistry and astronomy by projecting their inner world onto the world of matter.

There must be a reason why human evolution is pushing us toward the world of ideas and imagination as our next frontier, one that will, in my humble opinion, expand our understanding of the world and of science overall.

Aside from that, the field of AI will only get better over time, and who knows what we might become capable of (for good or ill).


idrajitsc t1_iy9o98b wrote

That paper addresses your first question directly, and better than I can. But in brief, it's nonsense because how could it not be? If there is real, interesting information content to what it's saying, how was it generated? How would you expect your network to have an understanding of anything, use that understanding to synthesize new ideas, and then accurately convey those ideas to you? All it has been trained to do is probabilistically produce coherent text--the training process has no interaction with the information content of the training texts, much less anything that would allow it to generate novel meaning.
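To make that concrete, here's a deliberately crude sketch: a word-level Markov chain, nowhere near a transformer in scale or architecture, but the same in spirit. The "training" step only counts which words tend to follow which, so the sampled output can look fluent while the process never touches what any of it means. (The two-sentence corpus is made up for illustration.)

```python
# Toy analogy, NOT an actual language model: a word-level Markov chain.
# "Training" records only which words follow which; it never engages
# with what the words mean.
import random
from collections import defaultdict

# Invented mini-corpus, purely for illustration.
corpus = ("the unexamined life is not worth living "
          "the life of virtue is the good life").split()

# "Training": for each word, collect the words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generation": repeatedly sample a plausible next word. Any apparent
# coherence comes purely from these counts, not from understanding.
word = "the"
out = [word]
for _ in range(10):
    if word not in follows:
        break
    word = random.choice(follows[word])
    out.append(word)

print(" ".join(out))  # e.g. "the life of virtue is the good life is not worth"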

As for the rest of your reasoning, you could use the same argument for anything at all that causes you to think about things. In line with that paper, would you want to spend serious intellectual effort on deriving deeper meaning from a parrot's chatter? Maybe the network accidentally outputs something that sends you along a path to productive thoughts. Or maybe you waste all your time trying to turn lead into gold. Like, of course you're free to experiment with it, but it's irresponsible to pretend it's outputting anything profound if you're going to be sharing it with other people.


Ingvariuss OP t1_iyagl8c wrote

Regarding your first paragraph: it sidesteps what I wrote about the model being more of a tool for inspiration, or a psychotechnology. In other words, it is indeed still up to the human to separate the wheat from the chaff and plant those "seeds" you mentioned earlier.

Regarding the second paragraph, I believe comparing it to a parrot is a straw man, especially for language models bigger and more advanced than the one I used as a proof of concept. As for the probability of it being (un)productive, isn't that the case for many things in life? It's especially true for scientists, who run thousands of failed experiments before the one success that advances us further. Nonetheless, I would any day prefer that we speak with each other and bounce ideas around rather than text with a bot.

As for being (ir)responsible, nowhere did I say that it outputs profound things, nor would it be intellectually honest to rule that out, since we are dealing with probabilities that aren't apparent to us. That also tells me you probably didn't read the full article linked in my post.


idrajitsc t1_iyaq0ha wrote

I mean, just throwing up your hands and saying "sure it's probably nothing, but most things are nothing" is a cop-out: why are you posting it here then?

You're contradicting yourself. If it's nothing more than a random text generator with Plato's mannerisms, why is it interesting, and why are you calling it a tool for approaching philosophical problems? And if you insist it has something more profound to say--it doesn't--then it's incumbent on you to justify that with something more than "it's really big and complex, so maybe it's doing something inexplicable."
