Submitted by SejaGentil t3_xyv3ht in MachineLearning: [D] Why can't language models, like GPT-3, continuously learn once trained?
[deleted] t1_irk31r2 wrote
[deleted]
visarga t1_irloorr wrote
> You can just split a large text into parts and feed each one of them
This won't capture long-range interactions between passages, and it ignores their ordering.
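For illustration, a minimal sketch of the chunking approach under discussion (the window size and `run_model` callback are hypothetical placeholders, not any real API). Because each chunk is run through the model independently, attention can never span two chunks:

```python
def chunk(tokens, window=2048):
    """Split a token list into independent, window-sized pieces."""
    return [tokens[i:i + window] for i in range(0, len(tokens), window)]

def process(tokens, run_model):
    """Run a fixed-context model over each chunk separately.

    Attention only operates within a chunk, so a passage in chunk 0
    can never attend to one in chunk 3, and the model receives no
    signal about how the chunks were ordered relative to each other.
    """
    return [run_model(piece) for piece in chunk(tokens)]
```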
[deleted] t1_irlwwz6 wrote
[deleted]
SejaGentil OP t1_irk65sz wrote
That doesn't make sense to me; I don't think we're speaking the same language. I absolutely understand that this is how it works, but why should it work that way? Humans learn by adjusting their synaptic weights; that is fundamental to our functioning as intelligent beings. It is fundamentally impossible for an AGI to be just a static set of weights that never gets updated, as it wouldn't learn. Humans don't need any labelling to learn, so why would deep neural networks?
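To make the distinction concrete, here is a minimal PyTorch-style sketch (the tiny model, loss, and learning rate are arbitrary assumptions, not how GPT-3 is actually served). Deployed models run only the frozen forward pass; continual learning would mean also running an update step on every interaction, which requires a training signal:

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16)  # stand-in for a trained network
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

def respond(x):
    """How GPT-3-style serving works: the weights stay frozen."""
    with torch.no_grad():
        return model(x)

def respond_and_learn(x, target):
    """What the question asks about: update the weights on every input.

    Note this needs a target to compute a loss against -- which is
    exactly the labelling problem raised in this thread.
    """
    y = model(x)
    loss = nn.functional.mse_loss(y, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return y.detach()
```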
[deleted] t1_irk6vyz wrote
[deleted]
SejaGentil OP t1_irkfnwg wrote
So that's where we disagree: I'd say humans learn a lot with no supervision. Like, we pick up our first language with no teaching whatsoever; we just do.
I don't actually have anything in mind; I'm honestly just bothered that AI programs like GPT-3 have static weights. It would make a lot more sense to me if they learned from their own prompts. Imagine, for example, if GPT-3 could remember who I am. I actually thought that was how LaMDA worked, i.e., that it had memories of that Google developer. But yeah, I guess that's just how things are made.
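As an aside, the usual workaround for "remembering who I am" without any weight updates is to carry the memory in the prompt itself. A minimal sketch, where `complete` is a hypothetical stand-in for a call to a frozen model:

```python
class ChatSession:
    """Simulated memory: the weights never change, but each request
    replays the accumulated history as part of the prompt."""

    def __init__(self, complete):
        self.complete = complete  # hypothetical frozen-model API call
        self.history = []

    def say(self, user_msg):
        self.history.append(f"User: {user_msg}")
        prompt = "\n".join(self.history) + "\nAssistant:"
        reply = self.complete(prompt)
        self.history.append(f"Assistant: {reply}")
        return reply
```

The model itself learns nothing between calls; the apparent memory lives entirely in the replayed context, which is bounded by the context window.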
[deleted] t1_irkh4l5 wrote
[deleted]
SejaGentil OP t1_irko521 wrote
I see, I see. Thanks for sharing your knowledge!