Submitted by randomkolmogorov t3_zf25ue in MachineLearning
UnusualClimberBear t1_iza00fp wrote
Reply to comment by randomkolmogorov in [Discussion] Suggestions on Trust Region Methods For Natural Gradient by randomkolmogorov
TRPO is often too slow for applications because of that line search, and researchers often prefer PPO, which is faster and still offers some guarantees on the KL divergence over the state distribution. I'd be curious to hear about your problem if it turns out that TRPO is the best choice.
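For context, the reason PPO avoids TRPO's line search is its clipped surrogate objective, which bounds the policy probability ratio instead of explicitly enforcing a KL constraint. A minimal sketch (the function name and `eps` default are illustrative, not from any particular library):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate: min(r * A, clip(r, 1-eps, 1+eps) * A).

    `ratio` is pi_new(a|s) / pi_old(a|s). Clipping the ratio to
    [1-eps, 1+eps] acts as an implicit trust region, replacing
    TRPO's KL-constrained line search with a single cheap min.
    """
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.minimum(ratio * advantage, clipped * advantage)

# Example: a ratio of 1.5 with positive advantage is capped at 1.2,
# so the update gets no extra credit for moving far from the old policy.
print(ppo_clip_objective(np.array([1.5]), np.array([1.0])))  # [1.2]
```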
randomkolmogorov OP t1_iza7hwl wrote
I am not really doing RL but rather aleatoric uncertainty quantification, where I need to optimize over a manifold of functions. My distributions are much more manageable than in policy gradient, so I have a feeling that, with some cleverness, it might be possible to sidestep a lot of the complications in TRPO while still using the same ideas from the paper.
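With manageable distributions, the natural-gradient machinery underlying TRPO can sometimes be applied in closed form rather than via conjugate gradients and a line search. As a hedged illustration (assuming a simple 1-D Gaussian family, which may not match the actual setting), the Fisher information matrix is diagonal and the natural gradient is just a coordinate-wise rescaling:

```python
import numpy as np

def natural_gradient_step(mu, sigma, grad_mu, grad_sigma, lr=0.1):
    """One natural-gradient descent step for N(mu, sigma^2).

    For a 1-D Gaussian, the Fisher information is diagonal:
    F = diag(1/sigma^2, 2/sigma^2), so F^{-1} @ grad reduces to
    scaling each Euclidean gradient component, with no solver needed.
    """
    nat_mu = sigma**2 * grad_mu              # F_mu^{-1} = sigma^2
    nat_sigma = 0.5 * sigma**2 * grad_sigma  # F_sigma^{-1} = sigma^2 / 2
    return mu - lr * nat_mu, sigma - lr * nat_sigma

# Example: only the mean gradient is nonzero, so only mu moves.
mu, sigma = natural_gradient_step(0.0, 1.0, grad_mu=1.0, grad_sigma=0.0)
print(mu, sigma)  # -0.1 1.0
```

When the Fisher matrix is available in closed form like this, the trust-region idea reduces to choosing a step size in the natural metric, which is exactly the kind of simplification the commenter seems to be after.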