Andriyo t1_jedfs83 wrote
Reply to comment by Prestigious-Ad-761 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I'm not a specialist myself either, but I gather that what makes LLMs hard for humans to understand is that the models are large, with many dimensions (features), and that inference is probabilistic in some respects (that's how they implement creativity). All of that combined makes it hard to follow what's going on. But the same is true of any large software system; it's not unique to LLMs.
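For what it's worth, the "probabilistic" part typically comes from something like temperature sampling over the model's output scores. Here's a toy sketch (the function name `sample_next_token` and the numbers are just illustrative, not any particular model's actual implementation):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores (logits).

    Higher temperature flattens the distribution (more "creative"/random
    choices); lower temperature sharpens it toward the most likely token.
    temperature <= 0 falls back to a deterministic argmax.
    """
    rng = rng or np.random.default_rng()
    if temperature <= 0:
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Toy example: the same input can yield different tokens on repeated calls,
# which is one reason the output is hard to predict for a given input.
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_next_token(logits, temperature=1.0) for _ in range(5)])
```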
I use the word "understand" here to mean that one is able to predict how a software system will behave for a given input.