pyepyepie t1_j8e7gjp wrote
Reply to comment by bballerkt7 in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Thanks :) I agree it's useful, but I don't see how it's related to AGI. Besides, it was already done a long time ago; many "AI" agents used the internet before. I feel the real challenge is to control language models using structured data, perform planning, etc., not to use language models to interact with the world (which seems trivial to me, sorry). But of course, it's just my opinion - which is probably not even that smart.
VelveteenAmbush t1_j8fusa5 wrote
> I feel that the real challenge is to control language models using structured data, perform planning, etc.
I think the promise of tool-equipped LLMs is that these tools may be able to serve that sort of purpose (as well as, say, acting as calculators and running Wikipedia queries). One could imagine an LLM using a database module as long-term memory, keeping a list of instrumental goals, and so on. You could even give it access to a module that lets it fine-tune itself or create successor LLMs in some manner. All very speculative, of course. A minimal sketch of what such a "memory" tool might look like is below; the DB call syntax, the MemoryTool class, and the execute_tool_calls helper are all hypothetical and not from the paper.
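```python
# Hypothetical sketch of a Toolformer-style long-term memory tool. The model emits a
# call such as [DB("store", "goal", "summarize the paper")] in its generated text, and
# a wrapper executes the call and splices the result back into the text. All names here
# (DB, MemoryTool, execute_tool_calls) are illustrative assumptions, not from the paper.
import re
import sqlite3


class MemoryTool:
    """Toy long-term memory backed by SQLite."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)"
        )

    def __call__(self, op, key, value=None):
        if op == "store":
            self.conn.execute(
                "INSERT OR REPLACE INTO memory VALUES (?, ?)", (key, value)
            )
            self.conn.commit()
            return "OK"
        if op == "recall":
            row = self.conn.execute(
                "SELECT value FROM memory WHERE key = ?", (key,)
            ).fetchone()
            return row[0] if row else ""
        return "unknown op"


# Matches calls like [DB("store", "goal", "some text")] or [DB("recall", "goal")].
CALL_PATTERN = re.compile(r'\[DB\("(\w+)",\s*"([^"]*)"(?:,\s*"([^"]*)")?\)\]')


def execute_tool_calls(generated_text, tool):
    """Replace each DB(...) call in the model's output with the tool's result."""

    def run(match):
        op, key, value = match.group(1), match.group(2), match.group(3)
        return tool(op, key, value)

    return CALL_PATTERN.sub(run, generated_text)


# Example: the model "remembers" an instrumental goal and recalls it in a later turn.
memory = MemoryTool()
print(execute_tool_calls('[DB("store", "goal", "summarize the paper")]', memory))
# -> OK
print(execute_tool_calls('Current goal: [DB("recall", "goal")]', memory))
# -> Current goal: summarize the paper
```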
bballerkt7 t1_j8eddln wrote
No worries, I think you definitely have a valid take. I always feel not smart talking about AI stuff lol :)
farmingvillein t1_j8frv87 wrote
> not to use language models to interact with the world (which seems trivial to me, sorry),
The best argument here is that "true" intelligence requires "embedded" agents, i.e., agents that can interact with our (or, at least, "a") world in order to learn.
Obviously, no one actually knows what will make AGI work, if anything... but it isn't a unique/fringe view, contrary to what OP is suggesting.