Submitted by Emergency_Apricot_77 t3_zmd6l8 in MachineLearning
Been experimenting with language models a lot lately and wondering if human-generated text (i.e. "natural" text) is really supposed to be maximally likely according to language models, even after training. For example, has anyone compared the likelihood of human-translated text to the likelihood of machine-translated text according to a language model like GPT-3?
Are there any works that do this already? Does this idea even make sense to begin with?
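Concretely, what I have in mind is something like comparing average per-token log-likelihoods under a model you can run locally (GPT-3 is API-only, so GPT-2 via Hugging Face transformers as a stand-in). Rough sketch, the texts are just placeholders:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy
        # over predicted tokens, i.e. the negative average log-likelihood.
        loss = model(ids, labels=ids).loss
    return -loss.item()

human_translation = "..."    # placeholder: human-translated sentence
machine_translation = "..."  # placeholder: MT output for the same source
print(avg_log_likelihood(human_translation))
print(avg_log_likelihood(machine_translation))
```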
dojoteef t1_j0ayqqq wrote
See the graphs in the paper that introduced nucleus sampling, The Curious Case of Neural Text Degeneration. They visualize how human-authored text has different statistical properties from machine-generated text. The difference mainly comes down to a tradeoff between fluency and coherence: sampling procedures like top-k or nucleus sampling restrict which tokens can be emitted, which introduces statistical bias into the generated text but produces more fluent output. Conversely, sampling from the full distribution gets closer to the distribution of human-authored text, but often degenerates into incoherence (hence the title of the paper).
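For intuition, here's a rough sketch (illustrative, not the paper's code) of what top-k and nucleus (top-p) filtering do to the next-token distribution before sampling:

```python
import torch

def top_k_filter(logits, k):
    # Keep only the k highest-scoring tokens; mask the rest out.
    cutoff = torch.topk(logits, k).values[..., -1, None]
    return logits.masked_fill(logits < cutoff, float("-inf"))

def nucleus_filter(logits, p):
    # Keep the smallest set of tokens whose cumulative probability exceeds p.
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    cum_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
    remove = cum_probs > p
    # Shift right by one so the first token crossing the threshold is kept.
    remove[..., 1:] = remove[..., :-1].clone()
    remove[..., 0] = False
    # Map the mask back from sorted order to vocabulary order.
    remove = remove.scatter(-1, sorted_idx, remove)
    return logits.masked_fill(remove, float("-inf"))

# Sampling from the filtered distribution trades diversity for fluency;
# sampling from the unfiltered logits matches the model's full distribution
# but can wander into incoherent continuations.
logits = torch.randn(50257)  # fake next-token logits (GPT-2 vocab size)
probs = torch.softmax(nucleus_filter(logits, p=0.9), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```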