Andriyo t1_je8qj9c wrote
Reply to comment by StevenVincentOne in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I wouldn't call how it operates a black box - it's just tensor operations and some linear algebra, nothing magic.
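For illustration, here's a toy sketch in Python/NumPy (my own made-up example with invented sizes and random weights, not any real model's code) of what a single attention-style layer boils down to: a few matrix multiplications and a softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def toy_attention_layer(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_model) weight matrices."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # plain linear algebra
    scores = q @ k.T / np.sqrt(k.shape[-1])   # scaled dot products between tokens
    return softmax(scores) @ v                # weighted sum of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                    # 4 tokens, 8-dim embeddings (made up)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(toy_attention_layer(x, Wq, Wk, Wv).shape)  # (4, 8)
```

Nothing in there is mysterious on its own; the question is what the trained weights end up encoding.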
Franimall t1_je905k3 wrote
We know how neurons work, but that doesn't mean we understand consciousness. It's the immense complexity and scale of the structure that makes up the black box, not the mechanism.
Prestigious-Ad-761 t1_jeb639j wrote
Did I say anywhere that a black box was magic? I'm referring to the fact that, with our current understanding, we can only with great difficulty infer why a neural network works well at a given task with the "shape" it acquired from its training. And inferring that for each task/subtask/micro-subtask it now has the capacity to achieve seems completely impossible, from what I understand.
But truly I'm an amateur, so I may well be talking out of my arse. Let me know if I am.
Andriyo t1_jedfs83 wrote
I'm not a specialist myself either, but I gather that what makes LLMs difficult for humans to understand is that the models are large, with many dimensions (features), and inference is probabilistic in some aspects (that's how they implement creativity). All of that combined makes it hard to understand what's going on. But that's true for any large software system. It's not unique to LLMs.
I use the word "understand" here in the sense of being capable of predicting how a software system will behave for a given input.
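To illustrate the probabilistic part, here's a toy sketch in Python (my own made-up example, not how any particular LLM actually implements it): the model produces a score per candidate token, and a sampler with a temperature setting picks the next token at random from the resulting distribution, so the same input can produce different outputs.

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=1.0):
    # turn raw scores into a probability distribution, then sample from it
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

logits = np.array([2.0, 1.0, 0.5, -1.0])  # made-up scores for 4 candidate tokens
print([sample_next_token(logits, temperature=0.8) for _ in range(5)])  # varies run to run
```

Lower temperatures make the pick more deterministic; higher ones make it more "creative", which is exactly the part that makes behavior harder to predict for a given input.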