Submitted by AutoModerator t3_11pgj86 in MachineLearning
LeN3rd t1_jcgu1z5 wrote
Reply to comment by Batteredcode in [D] Simple Questions Thread by AutoModerator
This is possible in multiple ways. The classical approach would be to view this as an inverse problem and apply an optimization method to it, such as ADMM or FISTA.
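For intuition, a minimal FISTA sketch for a generic masked inverse problem (the operator, lam, and the plain l1 prior here are illustrative assumptions; actual channel recovery would need a prior that couples the channels, e.g. TV or a sparsifying transform):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, At, y, lam, L, n_iter=200):
    """Minimize 0.5 * ||A(x) - y||^2 + lam * ||x||_1 with FISTA.
    A / At: forward operator and its adjoint (as functions);
    L: Lipschitz constant of the gradient (1.0 for a binary mask)."""
    x = At(y)
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - (1.0 / L) * At(A(z) - y), lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Forward model for "only the blue channel is observed" (RGB layout):
mask = np.zeros((3, 64, 64))
mask[2] = 1.0                      # keep B, drop R and G
A = lambda x: mask * x             # observation operator
At = lambda r: mask * r            # adjoint (the mask is diagonal)
# x_hat = fista(A, At, A(true_img), lam=0.01, L=1.0)
```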
If a lot of data is missing (in your case the complete R and G channels), you should use a neural network for this. You are on the right track, though it could get hairy. If you have a prior (i.e. you have a dataset and you want it to work on similar images), a (cycle)GAN or a retrained Stable Diffusion model could work.
I am unsure about VAEs for your problem, since you usually train them with the same input and output. You shouldn't enforce the latent to be only the blue channel, since then the encoder is useless. Training only the decoder side is essentially what GANs and diffusion networks do, so I would start there.
Batteredcode t1_jci3t9m wrote
Great, thank you so much for a detailed answer. Do you have anything you could point me to (or explain further) about how I could modify a diffusion method to do this?
Also, in terms of the VAE, I was thinking I'd be able to feed 2 channels in and train it to output 3 channels; I believe the encoder wouldn't be useless in this case, and hence my latent would be more than merely the missing channel? Feel free to correct me if I'm wrong! My assumption is that even so, another kind of NN may well perform better, or at least serve as a simpler baseline. That said, my images will be similar in certain ways, so being able to model a distribution over the latents could presumably prove useful?
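For concreteness, something like this is what I have in mind. A rough, untested PyTorch sketch where all the layer sizes are placeholder guesses:

```python
import torch
import torch.nn as nn

class ChannelVAE(nn.Module):
    """Rough sketch: encode 2 observed channels, decode all 3 (assumes 64x64 inputs)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),   # 2 input channels
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 3 output channels
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(self.fc_dec(z)), mu, logvar

# The reconstruction term would compare the 3-channel output against the full
# RGB target rather than the 2-channel input:
# loss = mse(recon, rgb_target) + beta * kl_divergence(mu, logvar)
```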
LeN3rd t1_jcitswg wrote
The problem with your VAE idea is that you cannot apply the usual reconstruction loss between the input and the output, and thus a lot of the nice theoretical guarantees go out of the window, afaik.
https://jaan.io/what-is-variational-autoencoder-vae-tutorial/
I would start with a cycleGAN:
https://machinelearningmastery.com/what-is-cyclegan/
It's a little older, but I personally know it a bit better than diffusion methods.
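The core of it is two generators trained with an adversarial loss plus a cycle-consistency loss. A heavily simplified sketch (model definitions omitted; G, F_inv, and D_rgb are placeholder names):

```python
import torch
import torch.nn.functional as F

# G: blue -> RGB generator, F_inv: RGB -> blue generator,
# D_rgb: discriminator on RGB images (all model definitions omitted).
def cycle_gan_generator_loss(G, F_inv, D_rgb, blue, rgb, lam=10.0):
    fake_rgb = G(blue)            # translate B -> RGB
    rec_blue = F_inv(fake_rgb)    # ...and back again
    fake_blue = F_inv(rgb)
    rec_rgb = G(fake_blue)

    # Adversarial term: fool the RGB discriminator (LSGAN-style)
    pred = D_rgb(fake_rgb)
    adv = F.mse_loss(pred, torch.ones_like(pred))

    # Cycle-consistency: translating there and back should be the identity
    cyc = F.l1_loss(rec_blue, blue) + F.l1_loss(rec_rgb, rgb)

    # (The symmetric adversarial term for a blue-channel discriminator,
    # and the discriminator updates themselves, are omitted here.)
    return adv + lam * cyc        # lambda = 10 is the common default
```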
With the freely available Stable Diffusion model you could conditionally inpaint your image, though you would have to describe what is in the image in text. You could also train your own diffusion model, though you need a lot of training time. Not necessarily more than a GAN, but still.
It works by adding noise to an image and then denoising it again and again. For inpainting, you do that only for the regions you want to fill in (your R and G channels); for the regions that should stay the same as your original image, you substitute the appropriately noised version of the known pixels at each step.
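In code, that loop looks roughly like this. A RePaint-style sketch with diffusers-style scheduler names; the model, scheduler, and mask shapes are assumptions, not a drop-in implementation:

```python
import torch

@torch.no_grad()
def inpaint(model, scheduler, known, mask):
    """RePaint-style inpainting sketch. known: image with the B channel
    filled in; mask: 1 where pixels are observed (B), 0 where they must
    be generated (R and G)."""
    x = torch.randn_like(known)                 # start from pure noise
    for t in scheduler.timesteps:
        # Ordinary reverse-diffusion step on the whole image
        eps = model(x, t)                       # predicted noise (with a
        # diffusers UNet2DModel this would be model(x, t).sample)
        x = scheduler.step(eps, t, x).prev_sample

        # Re-noise the *known* pixels to the current noise level and paste
        # them back in, so only the masked region is actually generated
        known_t = scheduler.add_noise(known, torch.randn_like(known), t)
        x = mask * known_t + (1 - mask) * x
    return x
```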
Batteredcode t1_jcllc74 wrote
Thank you, this is really helpful. I think you're right that the cycleGAN is the way to go!