phillythompson t1_ja4sny6 wrote
Reply to comment by Really_McNamington in Why the development of artificial general intelligence could be the most dangerous new arms race since nuclear weapons by jamesj
It’s not confidence that they are similar at all. There is potential, that’s what I’m saying, and folks like yourself are the ones being overconfident that “the current AI / LLMs are definitely not smart or thinking.”
I’ve yet to see a reason to dismiss the idea that these LLMs are similar to our own thinking, or even intelligent. That’s my point.
Really_McNamington t1_ja4vs33 wrote
Look, I'm reasonably confident that there will eventually be some sort of thinking machines. I definitely don't believe it's substrate dependent. That said, nothing we're currently doing suggests we're on the right path. Fairly simple algorithms output bullshit from a large dataset. No intentional stance, to borrow from Dennett, means no path to strong AI.
I'm as materialist as they come, but we're nowhere remotely close and LLMs are not the bridge.
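For readers weighing the “fairly simple algorithms over a large dataset” point above, here is a minimal sketch of that idea at toy scale. Everything in it (the bigram approach, the made-up corpus, the function names) is invented for illustration and far simpler than GPT-class models, which use transformers rather than bigram counts; but the objective, predicting the next token, is the same in spirit.

```python
import random
from collections import defaultdict

# Toy "language model": record which words followed which in a tiny
# corpus, then generate text by sampling a plausible next word.
# Illustrative only -- real LLMs use transformers with billions of
# parameters, not bigram counts.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def babble(start: str, length: int = 8) -> str:
    """Generate fluent-looking text with no model of meaning at all."""
    word, out = start, [start]
    for _ in range(length):
        if word not in followers:   # dead end: no observed successor
            break
        word = random.choice(followers[word])
        out.append(word)
    return " ".join(out)

print(babble("the"))  # e.g. "the cat sat on the rug"
```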
phillythompson t1_ja4xclz wrote
I’m struggling to see how you’re so confident that we aren’t on the right path, or anywhere close.
First, LLMs are neural nets, as are our brains (a toy sketch of what that means follows at the end of this comment). Second, one could argue that humans also take in data and output “bullshit”.
So I’m trying to see how we are different, given what we’ve seen thus far. Again, I’m not claiming we are the same; I just haven’t found anything showing why we’d be different.
Does that make sense? It seems like you’re making a concrete claim, “these LLMs aren’t thinking, and that’s certain,” and I’m asking, “how can we know that they aren’t similar to us? What evidence is there to show that?”
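Since the “neural nets” analogy is doing a lot of work in this exchange, here is a minimal sketch of what an artificial neuron actually computes: a weighted sum pushed through a nonlinearity. All names and numbers below are made up for illustration; real LLMs stack billions of such units plus attention layers, and the correspondence to biological neurons is loose at best.

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One artificial neuron: a weighted sum of inputs, then a ReLU
    nonlinearity. This is the primitive that "neural net" refers to."""
    activation = np.dot(inputs, weights) + bias
    return float(max(0.0, activation))

x = np.array([0.5, -1.2, 3.0])   # made-up input signals
w = np.array([0.8, 0.1, -0.4])   # made-up learned weights
print(neuron(x, w, bias=0.2))    # -> 0.0 (the weighted sum is negative)
```

Whether stacking enough of these yields anything like thinking is exactly what the two commenters disagree about.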
Really_McNamington t1_ja6vq1o wrote
Bold claim that we actually know how our brains work. Neurologists will be excited to hear that we've cracked it. The ongoing work at openworm suggests there may still be some hurdles.
To my broader claim: ChatGPT is just a massively complex version of ELIZA. It has no self-generated semantic content, and no mechanism at all by which it can know what it's doing. Even though I don't know how I'm thinking, I know that I'm doing it. LLMs just can't do that, and I don't see how that could emerge along this path.
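For context on the ELIZA comparison: Weizenbaum's 1966 program held conversations through keyword pattern matching and canned response templates, with no representation of meaning. The sketch below is a toy in that spirit, not Weizenbaum's actual script format; the patterns and responses are invented for illustration.

```python
import re

# Toy ELIZA-style responder: match the input against canned patterns
# and echo fragments back. No semantics anywhere -- just string rules.

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def respond(utterance: str) -> str:
    """Return the first matching canned response; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when nothing matches

print(respond("I feel stuck"))  # -> "Why do you feel stuck?"
```

Whether an LLM is “just” a massively scaled version of this, or something qualitatively different, is the crux of the disagreement above.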