Prestigious-Ad-761 t1_jeb639j wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Did I say anywhere that a black box was magic? I'm referring to the fact that, with our current understanding, we can only with great difficulty infer why a neural network performs well on a given task, given the "shape" it acquired from its training. And inferring that for every task/subtask/microsubtask it now has the capacity to achieve seems completely impossible, from what I understand.
But truly I'm an amateur, so I may well be talking out of my arse. Let me know if I am.
Andriyo t1_jedfs83 wrote
I'm not a specialist myself either, but I gather that what makes LLMs difficult for humans to understand is that the models are large, with many dimensions (features), and inference is probabilistic in some respects (that's how they implement creativity). All of that combined makes it hard to understand what's going on. But that's true for any large software system. It's not unique to LLMs.
I use the word "understand" here to mean that one is capable of predicting how a software system would behave for a given input.
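To illustrate the "probabilistic in some aspects" point: a minimal sketch of temperature-scaled softmax sampling, the common way LLM decoders turn scores into a randomly chosen next token. The token names and logit values below are made up for illustration, not from any real model:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities; temperature scales randomness."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature):
    """Draw one token according to the temperature-adjusted distribution."""
    return random.choices(tokens, weights=softmax(logits, temperature), k=1)[0]

# Hypothetical vocabulary and scores for the next token.
tokens = ["cat", "dog", "bird"]
logits = [2.0, 1.0, 0.1]

# Low temperature -> nearly deterministic (top token almost always wins);
# high temperature -> unlikely tokens show up far more often.
print(softmax(logits, temperature=0.1))
print(softmax(logits, temperature=2.0))
print(sample_token(tokens, logits, temperature=2.0))
```

The same input can yield different outputs from run to run at nonzero temperature, which is one concrete reason "predicting how the system behaves for a given input" is harder here than for ordinary deterministic software.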