Submitted by Background-Loan681 t3_y6eth3 in singularity
4e_65_6f t1_isp73il wrote
This is not imagination; this is the most likely answer to the prompt "imagine something" given the text data. It's evaluating the probability of such-and-such text appearing, not obeying your commands.
Edit: In a sense, it could be considered similar to imagination, since whatever text it is using as reference was written by someone who did imagine something. So in a way it's picking bits and pieces of someone's insights into imagination, but the engine itself isn't imagining anything on its own.
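To make that concrete, here's a rough Python sketch of the mechanism (the candidate tokens and scores are invented for illustration; a real model scores tens of thousands of tokens using learned weights):

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a model might assign to a few candidate
# next tokens after the prompt "Imagine something ..."
candidates = ["beautiful", "strange", "purple", "the"]
logits = [3.1, 2.4, 1.0, -0.5]

for token, p in sorted(zip(candidates, softmax(logits)), key=lambda t: -t[1]):
    print(f"{token!r}: {p:.2f}")
# The model samples a continuation from this distribution. It never
# "decides to obey" the instruction; it just continues the text in the
# statistically likeliest way.
```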
Background-Loan681 OP t1_isp8sgf wrote
I'm not an expert or even that knowledgeable about this so... Can I ask you something?
What is the difference between how an AI would 'imagine something' and how a human would 'imagine something'?
I would assume that both look up relations in the data they've gathered, and usually they would come up with a picture in their head that is pleasant to imagine.
So... to what degree are humans and AIs similar in imagining stuff? And what are the major differences between how the two imagine stuff?
(Sorry for asking too much, I'm just curious about this since I don't know much about how AI works)
redwins t1_ispu1al wrote
Does the human race strike you as the beacon of Reason? GPT-3 is as reasonable as the best of humans, I would say. Imagination and Reason have always been overrated, or more precisely, we enjoy thinking too highly and pompously of ourselves. Isn't it just as exciting to think that the Universe is capable of producing us, with just a tad of luck in the soup of ingredients?
4e_65_6f t1_ispuk3v wrote
Well, it would be comparable to someone asking you to imagine something and, instead of doing it, you formulate the text response most similar to what you'd expect from someone who did imagine it. I agree it's not an easy thing to distinguish.
tooold4urcrap t1_ispfdo4 wrote
I think when we imagine something, it can be original. It can be abstract. It can be random. It can be something that doesn't make sense. I don't think anything with these AIs (is AI even the right term? I'm guessing it's more of a search engine type thing) is like that. It's all whatever we've plugged into it. It can't 'imagine', it can only 'access what we've given it'.
Does that make sense? I'm pretty high.
AdditionalPizza t1_ispk568 wrote
>I'm guessing it's more of a search engine type thing
It isn't. It's fed training data, and then that data is removed; it literally learns from the training data. Much like when I say the word river, you don't just recall a river you saw in a Google image search. You most likely think of a generic river that could be different the next time someone says the word, or maybe it's a quick, rough image of a river near your house that you've driven by several times over the years. Really examine what the first thing that pops into your head is. Do you think it's always the EXACT same? Do you think it's very detailed? The AI learned what a river is from data sets, and it recognizes a painting of a unique river when it "sees" one, the same as you and me.
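As a loose analogy (not real model internals, and the vectors are invented), what survives training is just learned numbers, a blended, generic representation of a concept rather than any stored example:

```python
# Toy "learned" word vectors; there is no stored photo or sentence
# left to look up, only these numbers.
river   = [0.82, 0.10, 0.65]
stream  = [0.79, 0.15, 0.60]
volcano = [0.05, 0.90, 0.20]

def similarity(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

print(similarity(river, stream))   # high: related concepts end up nearby
print(similarity(river, volcano))  # low: unrelated concepts end up far apart
```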
>It can't 'imagine', it can only 'access what we've given it'.
This is exactly what the OP asked for an answer to. You say it can't imagine something, it just has access to the data it was given. How do humans work? If I tell you to imagine the colour "shlupange", you can't. You have no data on that. Again, I will stress: these transformer AIs have zero saved data in the way you're imagining, where they just search it up and combine it all for an answer. They do not have access to the training data. So how do we say "well it can't imagine things, because it can't..."
...Can't what? I'm not saying they're conscious or have the ability to imagine. I'm saying nobody actually knows 100% how these AIs come to their conclusions beyond using probability to pick the best answer, which appears similar to how human brains work when you really think about the basic process happening in your head. Transformers are a black box at a crucial step in their "imagination" that isn't understood yet.
When you're reading this, you naturally just follow along and understand the sentence. When I tell you something, you instantly know what I'm saying. But it isn't instant; it actually takes a fraction of a second to process. Can you describe what happens in that quick moment? When I say the word cat, what exactly happened in your brain? What about turtle? Or forest fire? Or aardvark? I bet the last one tripped you up for a second. Did you notice your brain trying to search for something it thinks it might be? You had to try to remember your training data, but you don't have access to it, so you probably made up some weird animal in your head.
Altruistic_Yellow387 t1_ispjzwo wrote
But even humans need sources to imagine things (they extrapolate from things they already know and have seen; nothing is truly original)
visarga t1_isq5mvf wrote
I believe there is no substantial difference. Both the AI and the brain transform noise into some conditional output. AIs can be original in the way they recombine things - there's space for adding a bit of originality there - and humans can be pretty reliant on reusing other styles and concepts themselves - so not as original as we like to imagine. Both humans and AIs are standing on the shoulders of giants. The intelligence was in the culture, not in the brain or the AI.
UpsetRabbinator t1_istdga1 wrote
>This is not imagination,
4e_65_6f t1_istf1vm wrote
I didn't say it wasn't intelligence, just that it's not doing what OP asked it to.
If I told you to multiply 30*3 in your head, you could just remember that the result is 90 and, with no knowledge of multiplication, answer from memory rather than doing the math.
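In code, the distinction looks something like this (a toy sketch, not how either a brain or a model actually works):

```python
# Answering 30 * 3 by recall versus by actually multiplying.
memorized = {(30, 3): 90}  # a "remembered" result

def answer_by_recall(a, b):
    return memorized[(a, b)]  # lookup only; no arithmetic performed

def answer_by_doing(a, b):
    return a * b  # actually carries out the task

print(answer_by_recall(30, 3))  # 90
print(answer_by_doing(30, 3))   # 90 -- same output, different process
```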
The prompt asked it to imagine something; instead, it is only concerned with convincing the user that it did, using text references, rather than actually performing the task.