Submitted by SejaGentil t3_xyv3ht in MachineLearning
[deleted] t1_irk6vyz wrote
Reply to comment by SejaGentil in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
[deleted]
SejaGentil OP t1_irkfnwg wrote
So that's where we disagree: I'd say humans learn a lot with no supervision. We pick up our first language with no explicit teaching whatsoever; we just do.
I don't have anything specific in mind, actually; I'm honestly just bothered that AI programs like GPT-3 have static weights. It would make a lot more sense to me if they learned from their own prompts. Imagine, for example, if GPT-3 could remember who I am. I actually thought that was how LaMDA worked, i.e., that it had memories of that Google developer. But yeah, I guess that's just how things are made.
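For what it's worth, the usual workaround for static weights is to simulate memory at the prompt level: the model's parameters never change, but prior turns are prepended to each new prompt so the model can condition on them. Here is a minimal sketch of that idea; `fake_model` is a hypothetical stand-in for a real frozen LLM call, not an actual API.

```python
def fake_model(prompt: str) -> str:
    """Stand-in for a frozen language model: a pure function of its prompt,
    with no internal state and no weight updates between calls."""
    if "My name is Alice" in prompt:
        return "Hello, Alice!"
    return "Hello!"

class ChatSession:
    """Accumulates the transcript and replays it on every call,
    giving the illusion of memory without changing any weights."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def send(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        # The whole conversation so far becomes the new prompt.
        prompt = "\n".join(self.history) + "\nAssistant:"
        reply = fake_model(prompt)
        self.history.append(f"Assistant: {reply}")
        return reply

session = ChatSession()
session.send("My name is Alice.")
second = session.send("Do you remember my name?")
# The model "remembers" only because the earlier turn is in the prompt.
```

This is how chat products typically layer apparent memory on top of a static model; actually updating the weights per user (continual learning / fine-tuning) is a separate and much harder problem.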
[deleted] t1_irkh4l5 wrote
[deleted]
SejaGentil OP t1_irko521 wrote
I see I see. Thanks for sharing your knowledge!