kromem t1_j93b7pf wrote
Reply to comment by loga_rhythmic in [D] Please stop by [deleted]
Additionally, *What Learning Algorithm Is In-Context Learning? Investigations with Linear Models* from the other week showed that transformer models develop internal complexity well beyond what was previously assumed: reverse-engineering them reveals internal mini-models carrying out procedural steps nobody explicitly taught them in order to produce their results.
So if a transformer trained to imitate math ends up building internal mini-models that carry out mathematical processes it was never taught, how sure are we that a transformer tasked with recreating human thought as expressed in language isn't internally running some degree of parallel modeling of human experience and emotional states?
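For anyone who hasn't read the paper, the setup is simple enough to sketch. Here's a rough illustration (my own sketch, not the authors' actual code; `transformer_predict` is a hypothetical stand-in for a model trained from scratch on many such prompts): each prompt encodes a fresh linear regression problem, and the trained transformer's answer for a query point is compared against explicit learning algorithms run on the same in-context examples.

```python
# Sketch of the probing setup: does a transformer's in-context prediction
# track what an explicit learner (ridge regression, gradient descent)
# would compute from the same examples?
import numpy as np

rng = np.random.default_rng(0)
d, n_context = 8, 16  # input dimension, in-context examples per prompt

# One synthetic task: a hidden weight vector and noisy (x, y) pairs.
w_true = rng.normal(size=d)
X = rng.normal(size=(n_context, d))
y = X @ w_true + 0.1 * rng.normal(size=n_context)
x_query = rng.normal(size=d)

def ridge_predict(X, y, x_query, lam=0.1):
    """Reference learner: ridge regression fit to the in-context examples."""
    w_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return x_query @ w_hat

def gd_predict(X, y, x_query, lr=0.1, steps=500):
    """Reference learner: gradient descent on squared loss from w = 0."""
    w_hat = np.zeros(X.shape[1])
    for _ in range(steps):
        w_hat -= lr * (X.T @ (X @ w_hat - y)) / len(y)
    return x_query @ w_hat

# Hypothetical: the paper feeds (x1, y1, ..., xn, yn, x_query) to a trained
# transformer and finds its output closely tracks these explicit learners.
# transformer_pred = transformer_predict(X, y, x_query)

print("ridge       :", ridge_predict(X, y, x_query))
print("grad descent:", gd_predict(X, y, x_query))
print("true target :", x_query @ w_true)
```

The point is that nothing in the training objective says "run ridge regression"; the model is only ever rewarded for next-token prediction, and the algorithmic structure shows up anyway.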
This research is less than two weeks old and seems pretty relevant to the discussion, but my guess is that almost no one in the "it's just autocomplete, bro" crowd knows it exists, and I doubt most of them could make their way through the paper if they did.
There's some serious Dunning-Kruger going on when people assume that dismissing emotional distress expressed by an LLM automatically puts them on the right side of the curve.
It doesn't, and I'm often reminded of Socrates' words when I see people so self-assured about what's going on inside the black box of a hundred-billion-parameter transformer:
> Well, I am certainly wiser than this man. It is only too likely that neither of us has any knowledge to boast of; but he thinks that he knows something which he does not know, whereas I am quite conscious of my ignorance.