SejaGentil

SejaGentil t1_itsy3wa wrote

That makes a lot of sense to me. It would kind of imply that all our decisions are made by purely physical processes; that is, all our actions and movements are the result of electromagnetic interactions, just like in computers, and we take no part in them. Instead, we're just "watching" from the outside, in such a way that it is extremely convincing that it is "us" making these decisions, when it isn't. In that interpretation, "outside" is another realm we do not understand, and "us" is our real selves, which exist outside the physical universe. That would also imply that some humans might be watched by zero beings, i.e., they're purely physical, like computers, while other humans might be watched by more than one being, even though they'd never suspect it.

32

SejaGentil OP t1_irstl4q wrote

Thanks for this overview, it makes a lot of sense. Do you have any idea why GPT-3, DALL-E and the like are so bad at generating new insights and at logical reasoning? My feeling is that these networks are very good at recall, like a very dumb human who compensates for it with a Wikipedia-sized memory. For example, if I prompt GPT-3 with something like this:

This is a logical question. Answer it using exact, mathematical reasoning.

There are 3 boxes, A, B, C.
I take the following actions, in order:
- I put 3 balls in box A.
- I move 1 ball from box A to box C.
- I swap the contents of box A and box B.
How many balls are in each box?

It will fail miserably. Trying to teach it any kind of programming logic is a complete failure; it can't get very basic questions right, and asking it to reason step by step doesn't help. For me, the main goal of AGI is to be able to teach a computer to prove theorems in a proof assistant like Agda and have it be as capable as I am. But GPT-3 is as incapable of this as every other AI, and it seems like scaling won't change that. That's why, to me, it feels like AI as a whole is making zero progress towards (my concept of) AGI, even though it is achieving amazing feats in other realms, and that's quite depressing. I use GPT-3 Codex a lot when coding, but only for repetitive, trivial work, like converting formats. Anything that requires any sort of reasoning is out of its reach. Similarly, DALL-E is completely unable to generate new image concepts (like a cowboy riding an ostrich, a cow with a duck beak...).
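
For reference, the intended answer (A = 0, B = 2, C = 1) is trivial to check with a toy simulation; this sketch is my own, just to make the expected result explicit, not anything GPT-3 produces:

# Toy check of the box puzzle above.
boxes = {"A": 0, "B": 0, "C": 0}
boxes["A"] += 3                                   # put 3 balls in box A
boxes["A"] -= 1; boxes["C"] += 1                  # move 1 ball from A to C
boxes["A"], boxes["B"] = boxes["B"], boxes["A"]   # swap contents of A and B
print(boxes)                                      # {'A': 0, 'B': 2, 'C': 1}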

1

SejaGentil OP t1_irrtyrx wrote

If I may ask one last question: why layers? Why not a graph where each neuron may interact with every other neuron, exactly like the brain? Of course, not all edges need to exist; each neuron could have just a few connections to keep the number of synapses manageable. The point is to eliminate the layering, which looks artificial.
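
To make the contrast concrete, here's a rough sketch of what I mean (toy numpy code of my own, not how any real framework works): a layered network is just a chain of dense matrix multiplications, which is exactly what GPUs are good at and what backpropagation is easiest to define over, whereas an arbitrary sparse graph of neurons has to be updated connection by connection, as a recurrence iterated over time.

import numpy as np

rng = np.random.default_rng(0)

# Layered: the whole forward pass is a chain of dense matmuls, trivially batched.
def layered_forward(x, weights):
    for W in weights:
        x = np.tanh(W @ x)
    return x

# Arbitrary graph: each neuron reads from whichever neurons it happens to be
# wired to, and activations are iterated over time steps (a recurrence).
def graph_step(act, edges):
    new_act = act.copy()
    for target, sources in edges.items():
        new_act[target] = np.tanh(sum(w * act[s] for s, w in sources))
    return new_act

weights = [rng.standard_normal((4, 4)) for _ in range(3)]
print(layered_forward(rng.standard_normal(4), weights))

# 6 neurons, a handful of hand-picked connections: target -> [(source, weight)].
edges = {2: [(0, 0.5), (1, -0.3)], 3: [(2, 0.8)], 4: [(3, 1.0), (0, 0.2)], 5: [(4, -0.7)]}
act = np.zeros(6)
act[0], act[1] = 1.0, -1.0        # clamp two "input" neurons
for _ in range(4):                # let the signal propagate through the graph
    act = graph_step(act, edges)
print(act)

(You could also pack the graph into one big sparse weight matrix, but then it's effectively a recurrent net rather than a feed-forward one.)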

1

SejaGentil OP t1_irq5qyz wrote

I see. I understand everything you said, thanks for the info. I do disagree with the way things are done and feel a little sceptical about our whole approach now, but of course, not being part of the field, my opinion doesn't matter at all. At least now I understand it from your point of view.

1

SejaGentil OP t1_irkfnwg wrote

So that's where we disagree: I'd say humans learn a lot with no supervision. For example, we pick up our first language with no explicit teaching whatsoever; we just do.

I don't have anything specific in mind, actually; I'm honestly just bothered that AI programs like GPT-3 have static weights. It would make a lot more sense to me if they learned from their own prompts. Imagine, for example, if GPT-3 could remember who I am. I actually thought that was how LaMDA worked, i.e., that it had memories of that Google developer. But yeah, I guess that's just how things are built.
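
Just to illustrate what I have in mind, here is a rough sketch of "learning from its own prompts" using a small open model (GPT-2 via HuggingFace transformers). This is purely my own illustration of naive online fine-tuning, not how GPT-3 or LaMDA actually works, and in practice updating the weights on every conversation tends to make the model forget other things:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def remember(exchange: str):
    # Take one gradient step on the latest exchange. For a causal language
    # model the targets are just the input tokens shifted by one, so no
    # manual labelling is needed.
    inputs = tokenizer(exchange, return_tensors="pt")
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Hypothetical exchange: after enough steps like this, the weights themselves
# carry a trace of who the user is, instead of the context window doing it.
remember("User: My name is SejaGentil and I care about proof assistants like Agda.\n")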

1

SejaGentil OP t1_irk65sz wrote

That doesn't make sense to me; I don't think we're speaking the same language. I absolutely understand that this is how it works, but why should it be? Humans learn by adjusting their synaptic weights; that is fundamental to our functioning as intelligent beings. It is fundamentally impossible for an AGI to be just a static set of weights that is never updated, since such a system can't learn. Humans don't need any labelling to learn, so why would deep neural networks?

0

SejaGentil OP t1_irk0r98 wrote

Yes, I understand that; my point is that if it were continuously learning, the prompt would effectively have no limit, since the model would learn from your previous prompts. You could then teach it far more complex concepts than the prompt limit allows, guide it through a domain-specific problem, and get it to help you with insights and answers. That's not possible right now, since you can't fit an entire field or a complex problem into a single prompt.

−1

SejaGentil OP t1_irk0l6q wrote

Why do you need a notion of correctness? I thought language models learned from raw data, with no labelling involved. So, as you chat with the AI, it would update its weights (which, if I understand correctly, aren't updated at all when you use GPT-3). My question is more like: why aren't the weights updated as the AI interacts? That's what happens in real brains; that's how we form memories. Imagine trying to reason, study, learn or do anything useful with zero memory.
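
As far as I understand it, the targets come from the text itself — the "label" at each position is simply the next token — which is exactly why I'd expect no external labelling to be needed. A tiny illustration of my own:

# The "label" for each position is just the next word of the text itself.
text = "the cat sat on the mat".split()
inputs, targets = text[:-1], text[1:]
for x, y in zip(inputs, targets):
    print(f"given {x!r} -> predict {y!r}")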

1

SejaGentil t1_irdj74r wrote

Text-to-image is just the first thing that works, though. In the future we will probably have far more sophisticated and precise tools: Stable Diffusion's image-to-image, for example. And its reverse-prompt feature allows you to load yourself into the AI and make exact copies of you in any pose you want, with no need for words. So I don't quite agree that this is the limitation of AIs; it isn't. I do think AIs are very limited in reasoning and, ironically, creativity. They can't create new concepts that haven't been done before. If you ask DALL-E to create a dragon, it will. If you ask it to create a city, it will. Mix the two and the results will be awful; the dragon will never blend into the city well enough. Similarly, GPT-3 will gladly tell you the answer to any sophisticated question... that you can find on Wikipedia. Now ask it to solve the simplest problem it has no memory of, and it will fail miserably. Honestly, these technologies feel like the most stupid human ever born, who compensated for it with a memory the size of the Earth and memorized all of Wikipedia.
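
For what it's worth, the image-to-image workflow I mean looks roughly like this with the diffusers library (a sketch only: the checkpoint ID is just one commonly used model, the input file name is made up, and the image parameter was called init_image in older diffusers releases):

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "my_sketch.png" is a placeholder: e.g. a rough drawing of a dragon over a city.
init_image = Image.open("my_sketch.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a dragon flying over a city at sunset",
    image=init_image,     # called init_image in older diffusers versions
    strength=0.6,         # how far the result may drift from the original image
    guidance_scale=7.5,
).images[0]
result.save("dragon_city.png")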

6