
Serverside t1_itxsq3p wrote

Yeah, you essentially answered what I was asking: whether the output of a trained PFGM matches (or closely estimates) the empirical distribution of the training data. Since the end product of the “diffusion” was said to be a uniform distribution and the equations were ODEs rather than SDEs, I was having trouble wrapping my head around how the PFGM could empirically match the distribution. Thanks for answering all the questions!
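
For anyone else who was stuck on the same point, here's the toy picture that made it click for me: a deterministic, invertible map can push a uniform distribution onto essentially any target. The sketch below uses inverse-CDF transport rather than the actual PFGM backward ODE, and every name in it is mine, not from the paper or repo:

```python
# Toy illustration: deterministic transport of a uniform "prior" onto a
# target distribution, with no stochasticity anywhere in the mapping.
# PFGM does the analogous thing by integrating its backward ODE from
# uniform angles on a large hemisphere; here the map is the inverse CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

u = rng.uniform(size=100_000)               # uniform prior samples
x = stats.norm.ppf(u, loc=2.0, scale=0.5)   # deterministic inverse-CDF map

# The pushed-forward samples are exact draws from N(2, 0.5^2), so a KS
# test against the target finds no systematic deviation.
print(stats.kstest(x, stats.norm(loc=2.0, scale=0.5).cdf))
```

Deterministic doesn't mean degenerate — all the randomness lives in the prior, and the map just reshapes it.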


Serverside t1_itwm4o3 wrote

I see. Thanks for the in-depth response; your answers make sense. One last follow-up question: do PFGMs preserve the distribution of the data, or is the original distribution lost once the data has been transformed to a uniform distribution?

I know other stochastic generative models usually try to match or preserve the data distribution. Maybe you already answered this in your second paragraph, but I just wanted to make sure I understood (I've sketched my current understanding in the edit below).

Again, your blog and code look neat. I look forward to toying with them on some data of my own.
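
Edit, for anyone who finds this later: if I'm reading the continuous normalizing flow literature right, the distribution shouldn't be lost, because the ODE is invertible and the log-density along a trajectory evolves by the instantaneous change-of-variables formula (Chen et al., 2018):

```latex
% Log-density along an ODE flow dx/dt = f(x, t): the change is the
% negative divergence of the velocity field, so integrating backward
% from the uniform prior recovers the data distribution (up to how
% well the field is learned).
\frac{\mathrm{d} \log p_t(x(t))}{\mathrm{d}t} = -\nabla \cdot f(x(t), t)
```

So nothing is thrown away in principle; the data distribution is encoded in the learned field.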


Serverside t1_itw9tz2 wrote

OK, I'll bite. It looks cool from what I see in the blog. How does the model being deterministic impact (or not impact) its generative capabilities? I would think a deterministic mapping from the original data to uniform angles would not interpolate or extrapolate as well (much as plain autoencoders fall short of VAEs there).
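
Edit: to make the interpolation question concrete, this is the kind of experiment I had in mind — slerp between two prior samples and map each point back through the model. `decode` here is a hypothetical placeholder for whatever inverts the forward mapping (e.g., integrating the backward ODE), not a function from the blog's code:

```python
# Spherical interpolation (slerp) between two prior samples. Works for any
# generator with a deterministic prior -> data mapping; `decode` is a
# hypothetical stand-in for that mapping, not part of the linked repo.
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Interpolate along the great circle between z0 and z1 (assumes
    they are not parallel, so sin(omega) != 0)."""
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Hypothetical usage:
# z0, z1 = sample_prior(), sample_prior()
# frames = [decode(slerp(z0, z1, t)) for t in np.linspace(0.0, 1.0, 8)]
```

If the deterministic map is smooth, I'd hope the interpolants stay on-manifold the way VAE interpolations do — that's basically what I'm asking.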


Serverside t1_ir71pde wrote

Yeah, I've read the paper you linked, but I haven't really tried implementing conditional SGM code myself (I've worked with conditional generative models such as GANs and VAEs). I'm also interested in lower-dimensional data than images, so your code looked like a good starting point.

After some more reading, I'll take a shot at adding conditional capabilities to your code (rough sketch of what I'm thinking below).
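
Mostly to organize my own plan: a minimal conditional score network where the condition is embedded and concatenated with the data and time inputs. The architecture and all names are my own guesses, not anything from your repo or the paper:

```python
# Minimal conditional score network sketch (PyTorch): embed the condition
# and concatenate it with the data and time inputs. Placeholder
# architecture, not from the linked code.
import torch
import torch.nn as nn

class ConditionalScoreNet(nn.Module):
    def __init__(self, data_dim: int, cond_dim: int, hidden: int = 128):
        super().__init__()
        self.cond_embed = nn.Sequential(nn.Linear(cond_dim, hidden), nn.SiLU())
        self.net = nn.Sequential(
            nn.Linear(data_dim + hidden + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, data_dim),  # score estimate, same shape as x
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor, cond: torch.Tensor):
        # x: (batch, data_dim), t: (batch, 1), cond: (batch, cond_dim)
        h = torch.cat([x, self.cond_embed(cond), t], dim=-1)
        return self.net(h)
```

If that's right, the training loop stays the same as the unconditional case; the condition just rides along into the network.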
