Submitted by AutoModerator t3_11pgj86 in MachineLearning
darthstargazer t1_jcxqcgw wrote
Subject: Variational inference and generative networks
I've been trying to grasp the ideas behind variational autoencoders (Kingma et al.) vs normalizing flows (e.g. RealNVP).
If someone can explain the link between the two I'd be thankful! Aren't they trying to do the same thing?
YouAgainShmidhoobuh t1_jd2qmh1 wrote
Not entirely the same thing. VAEs offer approximate likelihood estimation, not exact. The difference here is key: VAEs do not optimize the log-likelihood directly, but instead maximize the evidence lower bound (ELBO), an approximation to it. Flow-based methods are exact: we map an easy, tractable base distribution to a more complex one, guaranteeing at each step that the learned distribution is a legit density via the change-of-variables theorem.
Of course, they both (try to) learn some probability distribution over the training data, and that is how they differ from GAN approaches, which do not directly learn a probability distribution.
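To make the "approximate vs exact" point concrete, here's a toy 1-D NumPy sketch (the model, function names, and parameter values are my own illustration, not from either paper). The flow computes the exact log-density of x = a*z + b through the change-of-variables formula, while the VAE-style bound is a Monte-Carlo ELBO that sits below the true log-likelihood unless q(z|x) matches the true posterior:

```python
import numpy as np

def flow_log_prob(x, a=2.0, b=1.0):
    """Exact log-density of x = a*z + b with base z ~ N(0, 1).
    Change of variables: log p(x) = log N(z; 0, 1) - log|a|, z = (x - b)/a."""
    z = (x - b) / a
    base_log_prob = -0.5 * np.log(2 * np.pi) - 0.5 * z**2
    return base_log_prob - np.log(abs(a))

def gaussian_kl(mu, log_var):
    """Closed-form KL(q || p) for q = N(mu, exp(log_var)) and p = N(0, 1)."""
    return 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)

def toy_elbo(x, mu_q, log_var_q, n_samples=10_000, seed=0):
    """Monte-Carlo ELBO for a toy model with decoder p(x|z) = N(x; z, 1),
    prior p(z) = N(0, 1), and approximate posterior q(z|x) = N(mu_q, exp(log_var_q)).
    ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)) <= log p(x)."""
    rng = np.random.default_rng(seed)
    z = mu_q + np.exp(0.5 * log_var_q) * rng.standard_normal(n_samples)
    recon_log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (x - z) ** 2
    return recon_log_lik.mean() - gaussian_kl(mu_q, log_var_q)
```

For this toy decoder the marginal is p(x) = N(x; 0, 2), so the true log-likelihood is available in closed form, and you can check that `toy_elbo` with a mismatched q (say, the prior itself) stays strictly below it, while `flow_log_prob` is the density itself, no bound involved.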
For more insight you might want to look at https://openreview.net/pdf?id=HklKEUUY_E
darthstargazer t1_jd4ts2d wrote
Awesome! Thanks for the explanation. "Exact" vs "approximate" is the key distinction!