3SquirrelsinaCoat t1_jaer6kr wrote
Reply to comment by rigidcumsock in Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
I know exactly what it is. And I chose my words intentionally.
rigidcumsock t1_jaerpb9 wrote
> The autonomy of the thought and a real desire to exist (not a pretend one like what is farted out by the Puppet Known as ChatGPT)
Then why are you claiming that ChatGPT pretends to have “autonomy of thought” or a “real desire to exist”? It’s just categorically incorrect.
3SquirrelsinaCoat t1_jaetk90 wrote
There have been plenty of demonstrations of that tool being steered into phrasing that is uniquely human. A NY Mag reporter, or someone like that, duped it into talking relentlessly about how it loved the reporter. Other examples are plentiful of the tool appearing to ascribe a sense of self in front of the user, largely because the user doesn't understand what they are using.
There is a shared sentiment I've seen in the public dialogue, perhaps most famously from that Google guy who was fired for saying he believed a generative chat tool was conscious (that was Google's LaMDA) - a narrative that something like chatgpt is on the verge of agi, or at least on a direct path toward it. And while a data scientist or architect or whatever may look at it and think, yeah, I can kind of see that if it becomes persistent and tailored, that's a kind of agi, the rest of the world thinks Terminator, HAL, whatever the fuck fiction. And because chatgpt has this tendency toward humanizing its outputs (which isn't its fault; that's the data it was trained on), there is an implied intellect and existence that the non-technical public perceives as real, and it's not real. It's a byproduct, a fart if you will, that results from other functions that are on their own valuable.
rigidcumsock t1_jaeu0ye wrote
You’re waaaaay off base. Of course I can tell it to say anything— that’s what it does.
But if you ask it what it likes or how it feels, etc., it straight up tells you it doesn't work like that.
It’s simply a language model tool and it will spell that out for you. I’m laughing so hard that you think it pretends to have any “sense of self” lmao
3SquirrelsinaCoat t1_jaeurbg wrote
>Of course I can tell it to say anything— that’s what it does.
No, that's not what it does. I'm leaving this. I thought you had an understanding of things.
rigidcumsock t1_jaeuwep wrote
I’m not the one claiming a language model AI pretends to have a sense of self or desire to exist, but sure. See yourself out of the convo lol