Submitted by eugene129 t3_11lgo2d in deeplearning
Comments
wutheringsouls t1_jbf9vd1 wrote
Do you mean at different time steps?
mr_birrd t1_jbfe325 wrote
Yes, t is the timestep index.
activatedgeek t1_jbfjafv wrote
From (1), it looks like V(x_t) is the conditional variance of x_t given x_{t-1} (for the forward process defined by q).
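For concreteness, a minimal restatement, assuming (1) is the usual DDPM forward step (the sqrt(1 - beta_t) parameterization); under that assumption beta_t is the conditional variance, not the marginal one:

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\!\bigl(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\bigr)
\;\Longrightarrow\; \operatorname{Var}(x_t \mid x_{t-1}) = \beta_t I,
\qquad
\operatorname{Var}(x_t) = (1-\beta_t)\,\operatorname{Var}(x_{t-1}) + \beta_t .
```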
[deleted] t1_jbhjv7j wrote
I don't get the last question, where V(x_t) = 1 would mean that beta_t = 1. Why the confusion?
eugene129 OP t1_jbwdfl2 wrote
As far as I know, N(x_t; ..., beta_t I) means that V(x_t) = beta_t, but if that is so, the two equations in the picture seem to contradict each other.
rkstgr t1_jbia52h wrote
First of all, beta_t is just a predefined variance schedule (in the literature, often linearly interpolated between 1e-4 and 1e-2), and it defines the variance of the noise that is added at step t. What you have in (1) is the variance of the sample x_t, which does not have to be beta_t.
What does hold for large t is Var(x_t) ≈ 1, as the sample converges to approximately a standard Gaussian with mean 0 and variance 1.
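To make that concrete, here is a minimal sketch, assuming the standard DDPM forward step x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps and an illustrative linear schedule (the exact endpoints are an assumption). It tracks the marginal variance of x_t and shows it approaching 1 for large t, even though every beta_t stays small:

```python
import numpy as np

# Minimal sketch, assuming the standard DDPM forward step
#   x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps,   eps ~ N(0, I),
# with an illustrative linear beta schedule.
T = 1000
betas = np.linspace(1e-4, 1e-2, T)  # predefined variance schedule beta_1 ... beta_T

var_x = 4.0  # start the data with variance 4 (deliberately not 1) to make the point
for t, beta in enumerate(betas, start=1):
    # The variance of the noise added at this step is exactly beta_t,
    # but the marginal variance of x_t mixes the old variance with that noise:
    #   Var(x_t) = (1 - beta_t) * Var(x_{t-1}) + beta_t
    var_x = (1.0 - beta) * var_x + beta
    if t % 250 == 0:
        print(f"t={t:4d}  beta_t={beta:.4f}  Var(x_t)={var_x:.4f}")

# Var(x_t) trends toward 1 for large t, regardless of the starting variance,
# while beta_t never comes close to 1.
```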
eugene129 OP t1_jbwd8i7 wrote
So... N(x_t; ..., beta_t I) doesn't mean that V(x_t) = beta_t?
mr_birrd t1_jbf8sdi wrote
The variance of your sample x_t, simple as that.