
pyepyepie t1_j9uanug wrote

In all honesty, at some point, any type of evaluation that is not qualitative is simply a joke. I observed this a long time ago while working on NMT and trying to base results on BLEU score - it literally meant nothing. Trying to force new metrics based on simple rules or computation will probably fail - I believe we need humans or stronger LLMs in the loop. E.g., humans should rank the outputs of multiple LLMs, and the same humans should do so for several different models, not just for the new one. Otherwise, I view it as a meaningless self-promoting paper (LLMs are not interesting enough to read about if there are no new ideas and no better performance). Entropy is good for language models that are like "me language model me no understand world difficult hard", not GPT-3-like models.
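A toy sketch of the kind of ranking setup I mean (the annotators, prompts, and model names here are all made up, and the aggregation is just a mean rank):

```python
from collections import defaultdict

# rankings[annotator][prompt] = model names ordered best-first
# (annotators, prompts, and models are hypothetical example data)
rankings = {
    "annotator_1": {
        "prompt_1": ["model_new", "model_baseline", "model_old"],
        "prompt_2": ["model_baseline", "model_new", "model_old"],
    },
    "annotator_2": {
        "prompt_1": ["model_new", "model_old", "model_baseline"],
        "prompt_2": ["model_new", "model_baseline", "model_old"],
    },
}

# Collect the rank each model received in every (annotator, prompt) judgment.
ranks = defaultdict(list)
for per_prompt in rankings.values():
    for ordered_models in per_prompt.values():
        for position, model in enumerate(ordered_models, start=1):
            ranks[model].append(position)

# Report mean rank per model (lower is better).
for model in sorted(ranks, key=lambda m: sum(ranks[m]) / len(ranks[m])):
    mean = sum(ranks[model]) / len(ranks[model])
    print(f"{model}: mean rank {mean:.2f}")
```

Mean rank is the simplest possible aggregation; something like Bradley-Terry or Elo over pairwise preferences would be a more principled way to combine the same human judgments.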

Edit: this semantic uncertainty approach looks interesting, but I would still rather let humans rank the results.
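For anyone curious, here is a rough sketch of how I read the semantic-uncertainty idea: sample several answers, merge the ones that mean the same thing, and take entropy over meaning clusters rather than over raw strings. The equivalence check below is only a placeholder; the actual method uses bidirectional entailment with an NLI model.

```python
import math

def same_meaning(a: str, b: str) -> bool:
    # Placeholder for an entailment check; the real method asks an NLI model
    # whether a entails b and b entails a.
    return a.strip().lower() == b.strip().lower()

def semantic_entropy(samples: list[str]) -> float:
    # Greedily cluster samples that share a meaning, then compute entropy
    # over the cluster probabilities.
    clusters: list[list[str]] = []
    for s in samples:
        for cluster in clusters:
            if same_meaning(s, cluster[0]):
                cluster.append(s)
                break
        else:
            clusters.append([s])
    probs = [len(c) / len(samples) for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Toy example: three samples agree, one disagrees -> low but nonzero entropy.
print(semantic_entropy(["Paris", "paris", " Paris ", "Lyon"]))
```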

8

_atswi_ OP t1_j9ukzlk wrote

That's a good point

What sounds like an open problem is how to get these LLMs to "quantify" that uncertainty themselves the same way humans do. It's also interesting how that relates to the broader question of sentience and consciousness.

1