SejaGentil
SejaGentil t1_itsy3wa wrote
Reply to comment by Shelfrock77 in Our Conscious Experience of the World Is But a Memory, Says New Theory by Shelfrock77
That makes a lot of sense to me. It would kinda imply that all our decisions are made by purely physical processes; that is, all our actions and movements are the result of electromagnetic interactions, just like in computers, and we take no part in them. Instead, we're just "watching" from the outside, in a way that makes it extremely convincing that it's "us" making these decisions, when it isn't. In that interpretation, "outside" is another realm we do not understand, and "us" is our real selves, which exist outside the physical universe. It would also imply that some humans might be watched by zero beings, i.e., they're purely physical, like computers, while others might be watched by more than one being without ever suspecting it.
SejaGentil t1_it5tw4h wrote
Reply to Physicists Got a Quantum Computer to Work by Blasting It With the Fibonacci Sequence by Shelfrock77
Have you tried logarithms?
SejaGentil OP t1_iruye6s wrote
Reply to comment by harharveryfunny in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
Just replying to thank you for all the info; I don't have any more questions for now.
SejaGentil OP t1_irstl4q wrote
Reply to comment by harharveryfunny in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
Thanks for this overview, it makes a lot of sense. Do you have any idea why GPT-3, DALL-E and the like are so bad at generating new insights and at logical reasoning? My feeling is that these networks are very good at recalling, like a very dumb human who compensates with a Wikipedia-sized memory. For example, if I attempt a prompt like this on GPT-3:
This is a logical question. Answer it using exact, mathematical reasoning.
There are 3 boxes, A, B, C.
I take the following actions, in order:
- I put 3 balls in box A.
- I move 1 ball from box A to box C.
- I swap the contents of box A and box B.
How many balls are in each box?
It will fail miserably. Trying to teach it any kind of programming logic is a complete failure; it can't get even very basic questions right, and asking it to go step by step doesn't help. For me, the main goal of AGI is to be able to teach a computer to prove theorems in a proof assistant like Agda, and have it be as capable as I am. But GPT-3 is as inept as every other AI, and it seems like scaling won't change that. That's why, to me, it feels like AI as a whole is making zero progress towards (my concept of) AGI, even though it is doing amazing feats in other realms, and that's quite depressing. I use GPT-3 Codex a lot when coding, but only for repetitive, trivial work like converting formats; anything that needs any sort of reasoning is out of its reach. Similarly, DALL-E is completely unable to generate new image concepts (like a cowboy riding an ostrich, or a cow with a duck beak).
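For reference, the puzzle is trivial to check mechanically; here's a minimal Python sketch of the intended reasoning (just to make the expected answer unambiguous, not anything GPT-3 produces):

```python
# Track the number of balls in each box through the three actions.
boxes = {"A": 0, "B": 0, "C": 0}

boxes["A"] += 3                                   # put 3 balls in box A
boxes["A"] -= 1; boxes["C"] += 1                  # move 1 ball from A to C
boxes["A"], boxes["B"] = boxes["B"], boxes["A"]   # swap contents of A and B

print(boxes)  # {'A': 0, 'B': 2, 'C': 1}
```

So the answer it should reach is A: 0, B: 2, C: 1.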
SejaGentil OP t1_irrtyrx wrote
Reply to comment by harharveryfunny in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
If I may ask one last question: why layers? Why not a graph where each neuron can interact with every other neuron, exactly like the brain? Of course, not every edge needs to exist; each neuron could have just a few connections to keep the number of synapses under control. The point is to eliminate the layering, which looks artificial.
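To make the contrast concrete, here's a toy numpy sketch of what I'm picturing (the shapes, names, and connection counts are made up for illustration). A layer is just the special case where the connection graph is dense and regular, which is what lets it run as a single matrix multiplication; an arbitrary sparse graph loses that regularity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)  # activations feeding into 32 neurons

# Layered: every neuron sees every input, so all 32x64 synapses
# collapse into one matrix multiplication (very GPU-friendly).
W = rng.standard_normal((32, 64))
layer_out = np.tanh(W @ x)

# Graph-like: each neuron keeps only a few arbitrary connections,
# so there is no single regular operation; you go neuron by neuron.
edges = {i: rng.choice(64, size=4, replace=False) for i in range(32)}
weights = {i: rng.standard_normal(4) for i in range(32)}
graph_out = np.array([np.tanh(weights[i] @ x[edges[i]]) for i in range(32)])
```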
SejaGentil OP t1_irq5qyz wrote
Reply to comment by harharveryfunny in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
I see. I understand all that you said, thanks for the info. I do disagree with the way things are done and feel a little sceptical about our whole approach now, but of course, not being part of the field, my opinion doesn't matter much. At least now I understand it from your point of view.
SejaGentil OP t1_irku09b wrote
Reply to comment by suflaj in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
Thanks for all the information. It was very helpful, and I think I understand the whole thing much better now.
SejaGentil OP t1_irko521 wrote
Reply to comment by [deleted] in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
I see, I see. Thanks for sharing your knowledge!
SejaGentil OP t1_irkfnwg wrote
Reply to comment by [deleted] in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
So that's where we disagree: I'd say humans learn a lot with no supervision. Like, we pick up our first language with no explicit teaching whatsoever; we just do.
I don't have anything in mind, actually; I'm honestly just bothered that AI programs like GPT-3 have static weights. It would make a lot more sense to me if they learned from their own prompts. Imagine, for example, if GPT-3 could remember who I am. I actually thought that was how LaMDA worked, i.e., that it had memories of its conversations with that Google developer. But yeah, I guess that's just how things are made.
SejaGentil OP t1_irk65sz wrote
Reply to comment by [deleted] in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
That doesn't make sense to me; I don't think we're speaking the same language. I absolutely understand that this is how it works, but why should it? Humans learn by adjusting their synaptic weights; that is fundamental to our functioning as intelligent beings. It is fundamentally impossible for an AGI to be just a static set of weights that is never updated, as it won't learn. And humans don't need any labelling to learn, so why would deep neural networks?
SejaGentil OP t1_irk0r98 wrote
Reply to comment by [deleted] in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
Yes, I understand that. My point is that if it were continuously learning, the prompt would have no limit, since the model would learn from your previous prompts. You could then teach it far more complex concepts than whatever fits inside the prompt limit: you could guide it through a domain-specific problem and then get it to help you with insights and answers. That's not possible right now, since you can't fit an entire field, or a complex problem, into a single prompt.
SejaGentil OP t1_irk0l6q wrote
Reply to comment by asterfield in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
Why do you need a notion of correctness? I thought language models learned from raw data, with no labelling involved. So, as you chat with the AI, it would update its weights (which, if I understand correctly, aren't updated at all when you use GPT-3). My question is really: why aren't the weights updated as the AI interacts? That's what happens in real brains; that's how we form memories. Imagine trying to reason, study, learn, or do anything useful with zero memory.
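What I'm imagining is roughly an online update step after every exchange. Here's a toy PyTorch sketch of the idea (purely hypothetical; TinyLM and interact are made-up names, and this says nothing about how GPT-3 is actually served):

```python
import torch
import torch.nn as nn

# Toy stand-in for a language model over a vocabulary of 1000 token ids.
class TinyLM(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens):             # tokens: (seq_len,)
        return self.out(self.emb(tokens))  # logits: (seq_len, vocab)

model = TinyLM()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def interact(prompt_tokens):
    """Answer a prompt, then take one gradient step on it, so every
    conversation nudges the weights (the "memory" I'm asking about)."""
    with torch.no_grad():
        reply = model(prompt_tokens).argmax(dim=-1)
    # Online update: treat the prompt as fresh next-token training data.
    logits = model(prompt_tokens[:-1])
    loss = loss_fn(logits, prompt_tokens[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return reply

reply = interact(torch.randint(0, 1000, (16,)))
```

With frozen weights, by contrast, the model forgets everything the moment it scrolls out of the prompt.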
SejaGentil t1_irirwko wrote
Reply to comment by DungeonsAndDradis in When do you think we'll have AGI, if at all? by intergalacticskyline
What are those?
SejaGentil t1_irdj74r wrote
Reply to comment by Tanglemix in We are in the midst of the biggest technological revolution in history and people have no idea by DriftingKing
Text-to-image is just the first thing that works, though; in the future we will probably have far more sophisticated and precise tools. Take Stable Diffusion's image-to-image, for example. And its reverse-prompt feature lets you load yourself into the AI and make exact copies of yourself in any pose you want, no words needed. So I don't quite agree that this is the limitation of AIs; it isn't. I do think AIs are very limited in reasoning and, ironically, creativity. They can't create new concepts that haven't been done before. Like, if you ask DALL-E to create a dragon, it will. If you ask it to create a city, it will. Mix the two and the results will be awful: the dragon will never blend into the city well enough. Similarly, GPT-3 will gladly tell you the answer to any sophisticated question... that you can find on Wikipedia. Ask it to solve the simplest problem it has no memory of, though, and it will fail miserably. Honestly, these technologies feel like the most stupid human ever born, who compensated with a memory the size of Earth and memorized the entire Wikipedia.
SejaGentil t1_iuo37m9 wrote
Reply to A comprehensive list of the most impactful AI advances in October. by SpaceDepix
Amazing initiative