
khamelean t1_j54vfo2 wrote

It doesn’t understand the connection. It’s just paraphrasing someone else who does.

60

GlitteringAccident31 t1_j54wmcd wrote

I thought so as well and went to check. Google shows this as the only result for the setup question.

21

khamelean t1_j54wt39 wrote

Google is not an exhaustive resource.

−7

curtyshoo t1_j54xs7q wrote

You made the assertion; the burden of proof is on you.

21

jozelino t1_j550dy6 wrote

Or we can think of the original statement as the assertion: "chatGPT thought it out itself".
When you really want to believe something, it's easy to find proof for it.

5

curtyshoo t1_j550o53 wrote

Wrong. ChatGPT uttered the joke. You claimed it was plagiarized. Prove it or simply STFU.

−11

jozelino t1_j550y1d wrote

You seem angry, almost like somebody insulted your god.
My apologies, let your dream live on!

−1

Gagarin1961 t1_j5513sy wrote

Wouldn’t sites that Google and OpenAI crawl for data be very similar?

2

fiftythreefiftyfive t1_j553mrp wrote

You can ask it some pretty obscure things actually, for which you can be fairly sure that no prior content exists, and it’s still able to create new material. It’s not just regurgitating material.

It’s especially good at essays. Ask for an essay on why character X from your favorite anime is or isn’t inherently evil, choose a length, and it’ll give you a coherent essay of approximately that length. It’s absolutely capable of connecting ideas (concepts and scenes from a show to the idea of “inherently evil”, for example - or in this case, likely something it knows about meerkats to something it knows about poker) and combining them in the way a joke normally is, based on its training.

1

khamelean t1_j554iqc wrote

The point is that’s just regurgitating connections and associations that already exist in its data set. It cannot reason about those concepts to build new connections.

3

fiftythreefiftyfive t1_j555avr wrote

What would “reasoning” look like, to you? What more is there to reasoning than building appropriate chains of connections? That’s generally how logic argumentation works. And as said, it builds them very coherently.

2

echohole5 t1_j54xc9m wrote

Nope, it's creating genuinely new content that makes sense. It's not just copying shit. That joke didn't exist before.

It is a real intelligence. An alien intelligence, but an intelligence nonetheless.

0

barneysfarm t1_j54xmzh wrote

The only way it "creates" new content is through amalgamation of existing knowledge and concepts.

It's not creative nor inspired, even if it may seem that way with limited observation.

9

feloncholy t1_j54yc8d wrote

Isn't that how humans create new content?

10

barneysfarm t1_j54yqye wrote

Not always. We have actual neural pathways that can make novel connections and inspire truly new ideas.

It's rare but there are genesis points of new ideas throughout history.

At this point AI can only be trained on existing data; it's not creating novel neural connections that could result in original thought.

3

Gagarin1961 t1_j5517eq wrote

> Not always.

But a lot of times, yes? And we call that intelligence.

1

barneysfarm t1_j551p7l wrote

And? This is artificial intelligence. It's doing its best to replicate the most basic level of intelligence, connecting existing ideas together, but it has no capability that would let it think for itself and create truly new concepts without relying on direction from an actually sentient being.

1

fiftythreefiftyfive t1_j55337s wrote

“At this point AI can only be trained on existing data; it's not creating novel neural connections that could result in original thought.”

Ah… no.

AI also learns through feedback loops, and it randomizes. It develops a sense of what is “good” from that feedback, and it can create new things on the basis of it.

1

barneysfarm t1_j5537d0 wrote

It's not creating anything that doesn't already exist. Not at this point.

2

fiftythreefiftyfive t1_j5554an wrote

It is. Like, you can ask it for essays on extremely obscure topics that likely no one has ever written an essay on. Specify a length. Even on abstract topics (whether some character from a not-all-too-well-known show is inherently evil or not), it'll produce a coherent answer, mention all the relevant scenes, and let you adjust what position it takes or how long the essay should be.

What it’s strongest at currently is the ability to tie ideas together - for example, scenes from a show and concepts (such as “inherently evil”). Hence why it’s particularly good at essays.

0

barneysfarm t1_j555el5 wrote

And it all depends on the user, the code, and the data it pulls from to make a response. It's not independently creative or intelligent; it's just great at making people believe it is.

2

fiftythreefiftyfive t1_j556cn0 wrote

It’s not just building search trees. That’s part of it, sure, but a big part of it is artificial neural networks (don’t mind the name, I don’t like it either) with feedback loops. You can think of it as a more efficient form of evolution: random modifications to its behavior lead to changes in outcome, and that behavior is then either encouraged or discouraged based on feedback (from human input and, if it’s well made, from self-testing). That’s part of the code, and that type of code is capable of creating new things, new solutions.
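That loop - random tweaks kept or dropped depending on how they score against feedback - can be sketched in a few lines. This is a toy illustration only: the fitness function and every name here are made up, and it is not how ChatGPT itself is trained.

```python
import random

def feedback(params):
    # Toy stand-in for "feedback": higher is better. Here it just
    # measures how close a behavior (a list of numbers) is to a
    # hidden target the loop is never told about directly.
    target = [0.3, -1.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(steps=5000):
    params = [0.0, 0.0, 0.0]  # arbitrary starting behavior
    best = feedback(params)
    for _ in range(steps):
        # Random modification of behavior...
        candidate = [p + random.gauss(0, 0.1) for p in params]
        score = feedback(candidate)
        # ...encouraged (kept) if feedback improves, discouraged
        # (dropped) if it doesn't.
        if score > best:
            params, best = candidate, score
    return params  # ends up near the hidden target
```

Real systems replace the random tweaking with gradient updates over millions of parameters, but the encourage/discourage structure is the same shape.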

0

barneysfarm t1_j556xjt wrote

I don't disagree with you. The point I was trying to make, in reply to the original comment, is that it simply cannot be independently creative, given that everything in its function depends on the inputs it receives from the user, the data it has to pull from, and, sure, an evolving code base.

It's the same reason that yes it can string together existing thoughts from existing data into an essay, but it hasn't produced any novel ideas because it can only pull from existing data.

2

fiftythreefiftyfive t1_j5594u3 wrote

The point I’m trying to make is that this evolving code part is capable of creativity, or at least a very good imitation of it.

That’s the main thing distinguishing old chess/Go bots from the new generation, which has become way, way stronger. The old bots essentially just did depth searches and then evaluated positions based on spoon-fed human knowledge. This was a big hurdle for Go bots in particular, because depth searches are extremely computationally expensive on a board that large.

The new generation instead plays millions of games against itself, randomly varying its strategies over time. If it wins, it tells itself, “hey, I won! Maybe that’s worth remembering”, slightly adjusts its parameters accordingly, and continues building from there.

These types of bots are capable of coming up with completely new strategies on their own. Again, not just through search trees (that’s completely infeasible for a game like Go) but by incrementally modifying their own parameters until they know how to play the game. And something similar can happen here, even if to a lesser degree. Go and chess have the advantage of a very clear definition of “good”: if you win the game, good, have your cookies, continue just like that, sport. For essays it’s vaguer - the best we have is user feedback, and you’d need some separate intelligent code to generate “feedback” on its own. But in this manner, it does something that is, imo, akin to “creativity”.

1

barneysfarm t1_j55f5ul wrote

It still cannot do so independently. That's my point. It depends entirely on our collective knowledge to do any of that. It is not creative by itself.

1

fiftythreefiftyfive t1_j55g18k wrote

Neither can humans. People didn't suddenly produce great artwork; going from flat medieval art to great Renaissance art took centuries, generations of artists building on each other's small innovations. I think your expectations exceed what people are capable of.

1

barneysfarm t1_j55gbw1 wrote

Except for the fact that you can sit with no stimuli and still end up with outputs from your brain.

ChatGPT is entirely dependent on a creative user if it is going to make a creative output. It will not do so independently, which has been my entire point. It can only be perceived as creative because it relies on creative work and inputs from creative beings.

1

Queue_Bit t1_j552ekh wrote

This is more "humans are special because we're special" bullshit.

ChatGPT may not be sentient but it is absolutely intelligent.

−1

barneysfarm t1_j552yds wrote

Independently? No.

It's only as intelligent as the user.

2

splashdust t1_j54yx2h wrote

I mean, that’s how humans come up with ideas too. That’s not to say that ChatGPT is “creative”, but the way it comes up with answers is not entirely dissimilar to how humans do it. Technically speaking.

4

barneysfarm t1_j54z659 wrote

It's combing available data and making matches based on prompts and feedback.

The brain can actually make new connections that never existed before. All AI does at this point is spoof the brain, and it's believable enough but clearly not independently intelligent.

2

splashdust t1_j553nq2 wrote

> It's combing available data and making matches based on prompts and feedback.

Again, essentially what brains do. The brain actually spoofs itself into believing that you were the one who came up with the idea or thought, but really it’s an automatic process that happens well before you are aware of the outcome.

0

barneysfarm t1_j553zp9 wrote

Except the brain can actually derive new ideas independently, whereas this is software that depends upon prompts and rules to return output. It is not independently intelligent by any means, nor creative.

You can make the same argument for most people, myself included. But we are fortunate enough to be able to think outside a prompt/response format, because we are not bound by code.

3

splashdust t1_j5576hl wrote

I’m not disputing that human brains can derive new ideas independently, just saying that they do it in a way similar to large language models.

The human thought process constantly loops back on itself, essentially creating its own prompts, and we have the means to evaluate the outcomes and determine their value to us. We can also feel something about them, which, of course, a language model can’t.

A tool like ChatGPT is essentially a brain-expansion add-on. Our brains only have so much capacity for information, and learning new information takes a lot of work. Now we can outsource some of that, and we can still evaluate and feel our way to an end result, just as we would if it came from our own brain.

So I would argue that human interaction with ChatGPT still produces a creative outcome. One could argue that it is a less personal one, but depending on the situation that doesn’t necessarily matter.

2

barneysfarm t1_j557uqn wrote

I agree with you. And I can see the validity of the argument that you can have a creative outcome, primarily because you have a creative being interacting with the tool.

What I was trying to emphasize, in response to the original comment on this thread, is that it is not yet independently creative or intelligent. It relies on our intelligence and creativity. I could have expressed that better.

2

splashdust t1_j558n5d wrote

Yeah, I know. I got a bit carried away there. These kinds of things are just so much fun to think about! :D

2

yoyoman2 t1_j55344n wrote

ChatGPT, or any of these generative AIs, is not technically taking results and putting them together. Instead, the training data is broken down into bits and passed through the network by the learning algorithm. ChatGPT (and the others) can run without an internet connection and give the same results.

0

draculamilktoast t1_j550piw wrote

That's what counts as thinking for 99% of people, so it's basically sentient.

−4

DrBimboo t1_j551l9e wrote

It's also not true. 'Quoting' is massively underselling it.

0