Jean-Porte t1_j6x8oyx wrote
Reply to comment by alpha-meta in [D] Why do LLMs like InstructGPT and LLM use RL to instead of supervised learning to learn from the user-ranked examples? by alpha-meta
Yes, but the LM has to take many steps (one per generated token) to produce the text.
We need to train the LM to maximize a reward that only arrives at the end of the sequence, and RL is the standard way to handle that delayed-reward credit-assignment problem.
alpha-meta OP t1_j6xylk8 wrote
Could you help me understand what the far-away rewards represent in this context? Are the steps the generation of the individual words? If so, do you mean words that occur early in the text? Couldn't a weighting scheme for the cross-entropy loss components be used in that case?
Jean-Porte t1_j6y0djg wrote
The beginning of the best possible answer might not be the best beginning on its own. It's the final outcome, the complete answer, that counts, so it makes sense to evaluate that. The reward is the feedback on the complete answer.
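The idea above can be sketched with a toy REINFORCE-style policy gradient: the model emits one token per step, but the reward is computed only on the *complete* sequence and then scales the log-prob gradient of every token. This is a minimal illustration, not InstructGPT's actual training setup (which uses PPO against a learned reward model); the vocabulary, target sequence, and learning rate here are all made up for the example.

```python
import math
import random

random.seed(0)

VOCAB = 2               # toy vocabulary: tokens 0 and 1 (assumption for the sketch)
T = 4                   # sequence length
TARGET = [1, 1, 1, 1]   # hypothetical "best complete answer"

# One logit vector per position: a stand-in for the LM's parameters.
logits = [[0.0] * VOCAB for _ in range(T)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample_sequence():
    """Generate a sequence token by token, recording per-step probabilities."""
    seq, probs = [], []
    for t in range(T):
        p = softmax(logits[t])
        tok = random.choices(range(VOCAB), weights=p)[0]
        seq.append(tok)
        probs.append(p)
    return seq, probs

def reward(seq):
    # Sequence-level reward: only the complete answer is scored,
    # mirroring human feedback on a full response.
    return 1.0 if seq == TARGET else 0.0

LR = 0.5
for step in range(500):
    seq, probs = sample_sequence()
    R = reward(seq)
    # REINFORCE update: every token's log-prob gradient is scaled by the
    # same end-of-sequence reward (grad of log-softmax = one-hot - p).
    for t in range(T):
        for v in range(VOCAB):
            grad = (1.0 if v == seq[t] else 0.0) - probs[t][v]
            logits[t][v] += LR * R * grad

# Greedy decode after training.
greedy = [max(range(VOCAB), key=lambda v: logits[t][v]) for t in range(T)]
```

Note that an early token gets credit (or blame) only through the reward on the finished sequence, which is exactly why a per-token supervised loss weighting can't capture this: there is no per-token label, only a score for the whole answer.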
alpha-meta OP t1_j6yud7x wrote
Ah yes, I see what you mean now, thanks!