Submitted by GlassAmazing4219 t3_10gut8k in Futurology
echohole5 t1_j54xc9m wrote
Reply to comment by khamelean in ChatGPT really surprised me today. by GlassAmazing4219
Nope, it's actually creating new content that makes sense. It's not just copying shit. That joke didn't exist before.
It is a real intelligence. It's an alien intelligence, but it is an intelligence.
barneysfarm t1_j54xmzh wrote
The only way it "creates" new content is through amalgamation of existing knowledge and concepts.
It's neither creative nor inspired, even if it may seem that way with limited observation.
feloncholy t1_j54yc8d wrote
Isn't that how humans create new content?
barneysfarm t1_j54yqye wrote
Not always. We have actual neural pathways that can make novel connections and inspire truly new ideas.
It's rare but there are genesis points of new ideas throughout history.
At this point AI can only be trained on existing data; it's not creating novel neural connections that could result in original thought.
Gagarin1961 t1_j5517eq wrote
> Not always.
But a lot of times, yes? And we call that intelligence.
barneysfarm t1_j551p7l wrote
And? This is artificial intelligence. It's doing its best to replicate the most basic level of intelligence, connecting existing ideas together, but it has no capabilities that would allow it to think for itself and create truly new concepts without relying on direction from an actually sentient being.
fiftythreefiftyfive t1_j55337s wrote
> At this point AI can only be trained on existing data; it's not creating novel neural connections that could result in original thought.
Ah… no
AI also learns through feedback loops, and it randomizes. It develops a sense of what is “good” based on that feedback, and it can create new things from there.
barneysfarm t1_j5537d0 wrote
It's not creating anything that doesn't already exist. Not at this point.
fiftythreefiftyfive t1_j5554an wrote
It is. Like, you can ask it for essays about extremely obscure topics that likely no one has ever written an essay on. Specify a length. Even on abstract topics (say, whether some character from a not especially well-known show is inherently evil or not). It’ll produce a coherent answer, mention all the relevant scenes, and you can adjust what position you want it to take, how long you want the essay to be, etc.
What it’s strongest at currently is the ability to tie ideas together - for example, scenes from a show and concepts (such as “inherently evil”). Hence why it’s particularly good at essays.
barneysfarm t1_j555el5 wrote
And it all depends on the user, the code, and the data it pulls from to make a response. It's not independently creative or intelligent; it's just great at making people believe it is.
fiftythreefiftyfive t1_j556cn0 wrote
It’s not just building search trees. That’s part of it, sure, but a big part of it is artificial neural networks (don’t mind the name, I don’t like it either) with feedback loops. You can think of it as a more efficient form of evolution: random modifications in its behavior that lead to changes in outcome, behavior that is then either encouraged or discouraged based on feedback (from human input and, if it’s well made, from self-testing). That’s part of the code. And that type of code is capable of creating new things, new solutions.
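If it helps, here’s a tiny made-up sketch of that “random modification plus feedback” idea (not how ChatGPT is actually trained - real systems adjust millions of weights with gradient methods - the target and feedback function here are invented purely for illustration):

```python
# Minimal sketch of "random modification + feedback". The TARGET and the
# feedback function are made up; they stand in for "was the output good?"
import random

TARGET = [0.2, -0.5, 0.9]   # the "right answer" the search never sees directly

def feedback(weights):
    # Higher score = better behaviour; the search only ever sees this number
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

weights = [random.uniform(-1, 1) for _ in range(3)]   # start with random behaviour
best = feedback(weights)

for _ in range(10_000):
    candidate = [w + random.gauss(0, 0.05) for w in weights]  # random modification
    score = feedback(candidate)
    if score > best:                 # feedback encourages the change...
        weights, best = candidate, score
    # ...otherwise the change is discarded (discouraged)

print(weights)   # ends up close to TARGET purely through the feedback loop
```

It lands on the answer without anyone writing the answer into its behavior - the feedback loop does the work.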
barneysfarm t1_j556xjt wrote
I don't disagree with you. The point I was trying to make in reply to the original comment is that it simply cannot be independently creative, given that everything in its function depends on the inputs it receives from the user, the data it has to pull from, and, sure, an evolving code base.
It's the same reason that, yes, it can string together existing thoughts from existing data into an essay, but it hasn't produced any novel ideas, because it can only pull from existing data.
fiftythreefiftyfive t1_j5594u3 wrote
The point I’m trying to make is that this evolving code part is capable of creativity, or at least a very good imitation of it.
That’s the main thing distinguishing old chess/go bots from the new generation, which has become way, way stronger. The old bots essentially just did depth searches and then evaluated positions based on spoon-fed human knowledge. This was a big hurdle for Go bots in particular, because depth searches are extremely computationally expensive on a board that large.
The new generation instead plays millions of games against itself. It randomly changes its strategies over time. If it wins, it tells itself, “hey, I won! Maybe that is worth remembering”, slightly adjusts its code accordingly, and continues building from there.
These types of bots are capable of coming up with completely new strategies on their own. Again - not just through search trees, that’s completely infeasible for a game like go - but by modifying their own code incrementally until they know how to play the game. And similar things can happen here, even if to a lesser degree. Go/chess have the advantage of having a very clear definition of what “good” is: if you win the game, good, have your cookie, continue just like that, sport. For essays etc. it’s a bit more vague - the best we have is user feedback, and you need some separate intelligent code to generate “feedback” on its own. But in this manner, it does something that is, imo, akin to “creativity”.
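For a feel of the self-play part, here’s a deliberately toy version (nothing like AlphaZero’s actual architecture - no neural network, no tree search - the game, update rule, and numbers are all made up for illustration). Two copies of the same bot play Nim against each other, winning moves get encouraged, losing moves get discouraged:

```python
# Toy self-play sketch. The game is Nim: 12 sticks, take 1-3 per turn,
# whoever takes the last stick wins.
import random
from collections import defaultdict

PILE = 12
# preference[pile][move]: how strongly the bot currently likes that move
preference = defaultdict(lambda: {1: 1.0, 2: 1.0, 3: 1.0})

def pick_move(pile):
    # Weighted random choice: the randomness is the "randomly changes its strategies" part
    moves = [m for m in (1, 2, 3) if m <= pile]
    return random.choices(moves, weights=[preference[pile][m] for m in moves])[0]

for game in range(50_000):
    pile, player = PILE, 0
    history = {0: [], 1: []}
    while pile > 0:
        move = pick_move(pile)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            winner = player            # took the last stick
        player = 1 - player
    # The feedback loop: encourage the winner's moves, discourage the loser's
    for p, m in history[winner]:
        preference[p][m] += 1.0
    for p, m in history[1 - winner]:
        preference[p][m] = max(0.05, preference[p][m] - 0.5)

# With enough games this tends to rediscover the classic Nim strategy
# (leave the opponent a multiple of 4) without ever being told it.
print({p: max(preference[p], key=preference[p].get) for p in range(1, PILE + 1)})
```

Nobody spoon-feeds it the winning strategy; it falls out of self-play plus feedback.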
barneysfarm t1_j55f5ul wrote
It still cannot do so independently. That's my point. It depends entirely on our collective knowledge to do any of that. It is not creative by itself.
fiftythreefiftyfive t1_j55g18k wrote
Neither can humans. People didn't suddenly produce great artwork; going from flat medieval art to the quality of great Renaissance art took centuries, with generations of artists building on each other's small innovations. I think your expectations exceed what people are capable of.
barneysfarm t1_j55gbw1 wrote
Except for the fact that you can sit with no stimuli and still end up with outputs from your brain.
ChatGPT is entirely dependent on a creative user if it is going to make a creative output. It will not do so independently, which has been my entire point. It can only be perceived as creative because it relies on creative work and inputs from creative beings.
Queue_Bit t1_j552ekh wrote
This is more "humans are special because we're special" bullshit.
ChatGPT may not be sentient but it is absolutely intelligent.
drewbreeezy t1_j555yp8 wrote
Like a calculator is intelligent…
barneysfarm t1_j552yds wrote
Independently? No.
It's only as intelligent as the user.
splashdust t1_j54yx2h wrote
I mean, that’s how humans come up with ideas too. That’s not to say that ChatGPT is “creative”, but the way it comes up with answers is not entirely dissimilar to how humans do it. Technically speaking.
barneysfarm t1_j54z659 wrote
It's combing available data and making matches based on prompts and feedback.
The brain can actually make new connections that never existed before. All AI does at this point is spoof the brain, and it's believable enough, but clearly not independently intelligent.
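For a toy picture of what I mean by "combing available data and making matches" (a made-up bigram sketch - obviously nowhere near what ChatGPT actually is, but it shows the recombination idea):

```python
# Toy bigram "language model": it can only ever recombine word-to-word
# transitions it has already seen (the corpus is invented for illustration).
import random
from collections import defaultdict

corpus = ("the brain makes new connections . "
          "the model makes matches from existing data . "
          "the brain is not a model .").split()

following = defaultdict(list)          # which words followed which in the data
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(prompt_word, length=10):
    out = [prompt_word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))   # "make a match" against the data
    return " ".join(out)

print(generate("the"))
```

Every sentence it produces looks new, but every single word-to-word transition already existed in the data it was fed.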
splashdust t1_j553nq2 wrote
> It's combing available data and making matches based on prompts and feedback.
Again, essentially what brains do. The brain actually spoofs itself into believing that you were the one who came up with the idea or thought, but really it’s an automatic process that happens well before you are aware of the outcome.
barneysfarm t1_j553zp9 wrote
Except the brain can actually derive new ideas independently, whereas this is software that depends upon prompts and rules to return output. It is not independently intelligent by any means, nor creative.
You can make the same argument for most people, myself included. But we are fortunate enough to be able to think outside of a prompt/response format, because we are not bound by code.
splashdust t1_j5576hl wrote
I’m not disputing that human brains can derive new ideas independently, just saying that they do it in a way similar to large language models.
The human thought process constantly loops back on itself, essentially creating its own prompts, and we have the means to evaluate the outcomes and determine their value to us. We can also feel something about it, which, of course, a language model can’t.
A tool like ChatGPT is essentially a brain-expansion add-on. Our brains only have so much capacity for information, and learning new information takes a lot of work. Now we can outsource some of that, and we can still evaluate and feel our way to an end result, just as we would if it came from our own brain.
So I would argue that human interaction with ChatGPT still produces a creative outcome. One could argue that it is a less personal one, but depending on the situation that doesn’t necessarily matter.
barneysfarm t1_j557uqn wrote
I agree with you. And I can see the validity of the argument that you can have a creative outcome, primarily because you have a creative being interacting with the tool.
What I was trying to emphasize, in response to the original comment on this thread, is that it is not yet independently creative or intelligent. It relies on our intelligence and creativity. I could have expressed that better.
splashdust t1_j558n5d wrote
Yeah, I know. I got a bit carried away there. These kinds of things are just so much fun to think about! :D
drewbreeezy t1_j555nj8 wrote
lol, right…