Submitted by AlmightySnoo t3_117iqtp in MachineLearning
Optimal-Asshole t1_j9c4h8d wrote
Reply to comment by AlmightySnoo in [D] On papers forcing the use of GANs where it is not relevant by AlmightySnoo
Okay lol, so I’m actually researching kinda similar things, and I assumed this paper was related because it used similar tools, but upon a closer look, nope, nvm. It’s not even using the generative model for anything useful.
So their paper just shows that the basic idea of least-squares PDE solving can be used for generative models. Okay, now it’s average class-project tier. I guess this demonstrates that, yes, these workshops accept literally anything.
Edit: it’s still not plagiarism, just not very novel. Plagiarism is stealing ideas without credit; what they did was take an existing idea and extend it in a very small, purely experimental way. Not plagiarism.
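For readers unfamiliar with the "least-squares PDE solving" idea being discussed: the basic recipe is to pick a parameterized ansatz for the solution and minimize the squared PDE residual at sample points. A minimal NumPy sketch, with a sine basis instead of a neural network and a hypothetical 1D Poisson problem chosen purely for illustration (none of this comes from the paper under discussion):

```python
import numpy as np

# Toy problem (assumed for illustration): -u''(x) = f(x) on (0, 1),
# u(0) = u(1) = 0, with f chosen so the exact solution is sin(pi*x).
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)

x = np.linspace(0.01, 0.99, 50)   # collocation points in the interior
K = 5                             # ansatz: u(x) = sum_k c_k sin(k*pi*x)

# For this ansatz, -u'' = sum_k c_k (k*pi)^2 sin(k*pi*x), so the PDE
# residual is linear in the coefficients c and minimizing its squared
# norm at the collocation points is an ordinary least-squares problem.
A = np.stack([(k * np.pi) ** 2 * np.sin(k * np.pi * x)
              for k in range(1, K + 1)], axis=1)
c, *_ = np.linalg.lstsq(A, f(x), rcond=None)

u = lambda xx: sum(c[k] * np.sin((k + 1) * np.pi * xx) for k in range(K))
print(abs(u(0.5) - np.sin(np.pi * 0.5)))  # error vs. the exact solution
```

With a neural-network ansatz the residual is no longer linear in the parameters, so the same squared-residual loss would be minimized by gradient descent rather than a direct least-squares solve, but the objective is the same.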
vikumwijekoon97 t1_j9fho30 wrote
I was looking into similar things in my undergrad thesis. My math wasn't great, so I couldn't comprehend much. Are there actual NN methods that can solve PDEs without depending on the initial conditions? I was looking into soft-body physics simulation using GPUs.
Optimal-Asshole t1_j9fktzg wrote
> Are there actual NN methods that can solve PDEs without depending on the initial conditions?
The initial condition needs to be known (though it can be noisy, e.g. measurements corrupted by noise [1]), but NN-based models can solve some parametric PDEs much faster than traditional solvers. [2]
There is also a lot of work on training NNs on data generated by traditional methods, and this can be combined with the above approach to solve a whole class of problems at once. [3]
Solving a whole parametric family of PDEs (i.e., a parameterized family of initial conditions) and handling complicated geometries will be the next avenue for this specific field, IMO. In fact, it's already being actively worked on.
[1] https://arxiv.org/abs/2205.07331
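The "train NNs on data generated from traditional methods" idea can be sketched minimally. Everything below is a hypothetical stand-in, not the method from any cited paper: a 1D heat equation, an exact spectral "traditional solver", and a linear surrogate playing the role of the learned model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 32, 0.01
x = np.linspace(0, 1, n + 2)[1:-1]                       # interior grid
modes = np.array([np.sin(k * np.pi * x) for k in range(1, n + 1)])

def solve_heat(u0, t):
    """'Traditional' solver: for u_t = u_xx on (0,1) with zero boundary
    values, each sine mode k is damped by exp(-(k*pi)^2 * t)."""
    k = np.arange(1, n + 1)
    c = 2 / (n + 1) * modes @ u0                         # sine coefficients
    return modes.T @ (c * np.exp(-(k * np.pi) ** 2 * t))

# Generate (initial condition -> solution at time t) training pairs
# by running the traditional solver.
U0 = rng.standard_normal((200, n))
UT = np.array([solve_heat(u0, t) for u0 in U0])

# Fit a surrogate from the data (a linear map here, where a real
# method would train a neural network): UT ~ U0 @ W.
W, *_ = np.linalg.lstsq(U0, UT, rcond=None)

# The surrogate now maps *unseen* initial conditions to solutions
# without re-running the solver.
u_new = np.sin(2 * np.pi * x)
print(np.max(np.abs(u_new @ W - solve_heat(u_new, t))))  # surrogate error
```

The payoff described in the comment is exactly this amortization: the solver is expensive but run only offline to build the dataset, while the trained model answers new instances of the parametric family cheaply.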
vikumwijekoon97 t1_j9fma3t wrote
That's pretty awesome
AlmightySnoo OP t1_j9c56bx wrote
>It’s not even using the generative model for anything useful.
Thank you, that's exactly what I meant in my second paragraph. They're literally training the GAN to learn Dirac distributions. The noise has no use, and the discriminator eventually ends up learning to do roughly the job of a simple squared loss.
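The claim that the noise input becomes useless when the target is a Dirac can be illustrated with a toy experiment (entirely hypothetical setup; a plain squared loss stands in for the converged discriminator, as the comment suggests it effectively does):

```python
import numpy as np

# If the target "distribution" is a Dirac mass at y, a generator trained
# to match it drives its noise weight to zero: G(z) = w*z + b collapses
# to the constant b = y, so z carries no information.
rng = np.random.default_rng(0)
y = 3.0            # the single target value (a Dirac mass)
w, b = 1.0, 0.0    # toy linear generator parameters
lr = 0.05
for _ in range(2000):
    z = rng.standard_normal(64)          # noise batch
    g = w * z + b
    # gradient steps on the mean squared loss E[(G(z) - y)^2]
    w -= lr * np.mean(2 * (g - y) * z)
    b -= lr * np.mean(2 * (g - y))
print(w, b)        # w ends up near 0, b near y: the noise is ignored
```

This is only the degenerate endpoint, of course; the whole point of a generative model is targets that are *not* Diracs, which is why using a GAN here buys nothing over regression.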