snipeor t1_jdqbjii wrote
Reply to comment by Villad_rock in Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
To some extent I believe a large part of it is... I loved the screencap of Bing Chat where someone tells it, "You're a very new version of a large language model, why should I trust you?" and it replies, "You're a very old version of a small language model, why should I trust you?"

I'm not sure Bing "meant" it that way, but it gets you thinking. Obviously brains do a lot more than process language, but with LLMs being black boxes, how do we know they don't process language in a similar way to us?