
Batteredcode t1_jci3t9m wrote

Great, thank you so much for such a detailed answer. Do you have anything you could point me to (or explain further) about how I could modify a diffusion method to do this?
Also, in terms of the VAE, I was thinking I'd be able to feed 2 channels in and train it to output 3 channels. I believe the encoder wouldn't be useless in this case, and hence my latent would encode more than merely the missing channel? Feel free to correct me if I'm wrong! My assumption is that even so, a NN may well perform better, or at least serve as a simpler baseline. That said, my images will be similar in certain ways, so being able to model a distribution over the latents could presumably prove useful?
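For concreteness, here's a minimal sketch of the kind of asymmetric VAE I have in mind (PyTorch; the architecture is just a placeholder and assumes 64x64 inputs, with my assumed 2 channels in, 3 channels out):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricVAE(nn.Module):
    """Encodes a 2-channel image, decodes a full 3-channel image."""
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder sees only the 2 known channels
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        # Decoder reconstructs all 3 channels
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x2ch):
        h = self.encoder(x2ch).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        h = self.fc_dec(z).view(-1, 64, 16, 16)
        return self.decoder(h), mu, logvar

def loss_fn(recon3ch, target3ch, mu, logvar):
    # Reconstruction is against the full 3-channel target, not the 2-channel input
    recon = F.mse_loss(recon3ch, target3ch, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```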

1

LeN3rd t1_jcitswg wrote

The problem with your VAE idea is that you cannot apply the usual loss function (the difference between the input and the output), and thus a lot of the nice theoretical guarantees go out of the window, afaik.

https://jaan.io/what-is-variational-autoencoder-vae-tutorial/


I would start with a CycleGAN:

https://machinelearningmastery.com/what-is-cyclegan/

It's a little older, but I personally know it a bit better than diffusion methods.
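The core idea is two generators (your 2-channel domain to the 3-channel domain and back) trained with adversarial losses plus a cycle-consistency loss. A rough sketch of just the generator-side loss (PyTorch; `G_AB`, `G_BA`, `D_B` are placeholder networks you'd have to define):

```python
import torch
import torch.nn.functional as F

# G_AB: maps 2-channel images to 3-channel; G_BA: the reverse.
# D_B: discriminator on the 3-channel domain (definitions omitted).

def cycle_gan_generator_loss(G_AB, G_BA, D_B, real_A, lam=10.0):
    fake_B = G_AB(real_A)                              # translate domain A -> domain B
    d_out = D_B(fake_B)
    adv = F.mse_loss(d_out, torch.ones_like(d_out))    # least-squares GAN loss: fool D_B
    rec_A = G_BA(fake_B)                               # cycle back B -> A
    cyc = F.l1_loss(rec_A, real_A)                     # cycle consistency: recover the input
    return adv + lam * cyc
```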


You could use the free-to-use Stable Diffusion model to conditionally inpaint your image, though you would have to describe what is in the image in text. You could also train your own diffusion model, though that takes a lot of training time. Not necessarily more than a GAN, but still.
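If you go that route, the diffusers library has an off-the-shelf inpainting pipeline; something like the following (the model name and mask handling are from memory, so double-check against the docs):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.png").convert("RGB")  # your image, with known content filled in
mask = Image.open("mask.png").convert("L")      # white = regions to inpaint
result = pipe(
    prompt="a text description of what the image shows",
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```

Note this pipeline inpaints spatial regions given a mask, so you'd have to frame your missing channels as masked content.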

Diffusion works by adding noise to an image and then denoising it again and again. For inpainting you only generate content for the regions you want to inpaint (your R and G channels); for the regions that should stay the same as the original image, you substitute the appropriately noised version of the pixels you already know at each denoising step.
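In pseudocode, one denoising step with known-region replacement looks roughly like this (a DDPM-style sketch in the spirit of RePaint; `model.denoise_step` and `alphas_cumprod` are hypothetical stand-ins for your trained model and noise schedule):

```python
import torch

def inpaint_step(x_t, t, x_known, mask, model, alphas_cumprod):
    """One reverse-diffusion step that keeps known regions pinned.

    x_t:      current noisy image
    x_known:  the original image (only valid where mask == 0)
    mask:     1 where we want to inpaint, 0 where pixels are known
    """
    # Denoise the whole image one step with the learned model
    x_prev = model.denoise_step(x_t, t)  # hypothetical model API

    # Re-noise the known pixels to the matching noise level t-1,
    # using the forward process q(x_{t-1} | x_0)
    a = alphas_cumprod[t - 1]
    known_noised = a.sqrt() * x_known + (1 - a).sqrt() * torch.randn_like(x_known)

    # Stitch: generated content where masked, re-noised known content elsewhere
    return mask * x_prev + (1 - mask) * known_noised
```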

1

Batteredcode t1_jcllc74 wrote

Thank you, this is really helpful. I think you're right that the CycleGAN is the way to go!

1