Submitted by AlmightySnoo t3_117iqtp in MachineLearning

One of the things in current publications that completely irritates me is people forcing the use of GANs where they are neither needed nor suited, just to ride the hype of generative AI.

These guys usually have samples (x_1, y_1 = phi(x_1)), ..., (x_n, y_n = phi(x_n)) of a random pair (X, Y = phi(X)), where phi is some unknown target function (i.e., in fancy-pants math, we know that Y is sigma(X)-measurable). The direct way to solve this is to treat it as the regression problem it naturally is and use your usual ML/DL toolkit. These guys, however, think they can make the problem look sexier by introducing GANs. For instance, they'd train a GAN taking X as an input and, through the discriminator, have the generator output something with the same distribution as Y = phi(X). Some will even add random noise z, which has nothing to do with X, to the inputs of the generator despite knowing that X is already enough to fully determine Y. GANs would have been useful if we didn't have joint observations of X and Y, but that is not the case here.
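To make the setting concrete, here is a minimal sketch (assuming PyTorch; the target function, architectures, and names are made up purely for illustration). The plain regression fit is all that's needed; the conditional-GAN variant with an extra noise input only adds moving parts:

```python
import torch
import torch.nn as nn

# Stand-in for the unknown target function phi; purely illustrative.
phi = lambda x: torch.sin(3 * x) + 0.5 * x ** 2

x = 4 * torch.rand(1024, 1) - 2      # samples of X
y = phi(x)                           # Y = phi(X): fully determined by X

# Direct approach: plain regression with a squared loss.
reg = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reg.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    ((reg(x) - y) ** 2).mean().backward()
    opt.step()

# The GAN setup criticised above: a generator G(x, z) with extra noise z and a
# conditional discriminator D(x, y). Because Y is a deterministic function of X,
# the optimal generator has to ignore z, so the adversarial machinery buys nothing
# over the regression fit above; it only makes training less stable.
gen = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # takes (x, z)
disc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # takes (x, y)
```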

One of the papers I have in mind is this one: https://openreview.net/pdf?id=SDD5n1888

How on earth are these papers getting accepted? To me that is literally just plagiarism of what's already available (physics-informed NNs in that case), with a totally useless layer (the GAN) added to make it seem like a novel approach. That paper is only one of many cases. I know of a professor actively using the same technique to get cheap articles: he just replaces the standard regression NN in an old paper found online with a totally unjustified GAN. IMO reviewers at these journals/conferences need to be more mindful of this kind of plagiarism/low-effort submission.

31

Comments


Borrowedshorts t1_j9cy0ui wrote

It's not plagiarism. Novelty and plagiarism are two separate concepts.

38

vikumwijekoon97 t1_j9fj2dd wrote

I have a feeling that this is "have to do" research rather than "want to do" research. They probably did it just to finish their degree.

3

Agreeable-Run-9152 t1_j9c4naa wrote

Yeah, I actually agree with your rant. However, there is a small chance they acted in good faith and did not see that the randomness in the GAN won't do anything.

22

Mefaso t1_j9dbyoz wrote

>However, there is a small chance they acted in good faith and did not see that the randomness in the GAN won't do anything.

Why is the default assumption malice?

Especially if the only benefit would be a workshop paper

16

Agreeable-Run-9152 t1_j9dh1pu wrote

I would assume that someone who is capable of programming a GAN and going through all the steps of parameter tuning should at some point realize that the randomness shouldn't do anything.

10

Mefaso t1_j9dbox6 wrote

>IMO reviewers at these journals/conferences need to be more mindful of this kind of plagiarism/low-effort submission.

Workshops in general have a very low bar; this surely wouldn't have been published in the main track.

Other than that I don't really see the point of this rant.

Yes, there are a lot of bad papers, even in the main tracks; you just kind of get used to it.

It feels a lot like punching down. Maybe these are some undergraduates doing their first research project, and it's more about learning the methodology and writing than about presenting a very novel approach.

16

huehue12132 t1_j9e9xqf wrote

GANs can be useful as an alternative or additional loss function. See, e.g., the original pix2pix paper (https://arxiv.org/abs/1611.07004): they have pairs (X, Y) available, so they could have trained it as a regression task directly, but they found better results using an L1 loss plus a GAN loss.
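As a rough sketch of what such a combined objective looks like (hedged: `disc` here is a hypothetical conditional discriminator D(x, y), and the L1 weight is only illustrative, not necessarily the exact value from the paper):

```python
import torch
import torch.nn.functional as F

def generator_loss(disc, x, y_real, y_fake, lambda_l1=100.0):
    """Pix2pix-style generator objective: adversarial term plus weighted L1 term."""
    logits = disc(x, y_fake)                       # hypothetical conditional discriminator D(x, y)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    rec = F.l1_loss(y_fake, y_real)                # paired L1 reconstruction loss
    return adv + lambda_l1 * rec
```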

Keep in mind that using something like a squared-error loss carries a ton of underlying assumptions (if you interpret training as maximum likelihood estimation), such as the outputs being conditionally independent and Gaussian-distributed. A GAN discriminator can represent a more complex/more appropriate loss function.

Note: I'm not saying that many of these papers necessarily add anything of value, just that there are reasons to use GANs even if you have known input-output pairs.

15

[deleted] t1_j9f1cgt wrote

[removed]

2

notdelet t1_j9g627c wrote

> Assuming Gaussianity and then using maximum likelihood yields an L2 error minimization problem.

Incorrect; this is only true if you fix the scale parameter. I normally wouldn't nitpick like this, but your unnecessary use of bold made me.

> (if you interpret training as maximum likelihood estimation)

> a squared loss does not "hide a Gaussian assumption".

It does... if you interpret training as (conditional) MLE. Give me a non-Gaussian distribution with an MLE estimator that yields MSE loss. Also, residuals are explicitly not orthogonal projections whenever the variables are dependent.
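For reference, a sketch of the calculation behind both points, with f_theta standing in for the network and homoscedastic noise assumed:

```latex
% Conditional Gaussian model y_i ~ N(f_theta(x_i), sigma^2); negative log-likelihood:
\[
  -\log p(y_{1:n}\mid x_{1:n};\theta,\sigma)
    = \frac{1}{2\sigma^{2}}\sum_{i=1}^{n}\bigl(y_i - f_\theta(x_i)\bigr)^{2}
      + n\log\sigma + \frac{n}{2}\log 2\pi .
\]
% With sigma held fixed, minimizing over theta is exactly least squares; if sigma is
% estimated as well, the scale enters the objective and it is no longer plain MSE.
```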

0

notdelet t1_j9gija3 wrote

In the future, know that blocking someone after replying to them prevents them from responding to your reply. To everyone else reading this, it gives the false impression that I could respond but am choosing not to.

1

Optimal-Asshole t1_j9c20cy wrote

I think these workshops accept every submission that is not incoherent or desk rejected.

From my quick glance, it doesn't seem like plagiarism, since they cite their sources amply. As far as the justification goes, there are some generative-model-based approaches for solving parametric PDEs even now. It doesn't seem like the best paper ever, but I don't think it's that bad.

14

AlmightySnoo OP t1_j9c2trd wrote

>It doesn’t seem like plagiarism, since they do ample citation.

It is when you are pretending to do things differently while in practice you do the exact same thing and add a useless layer (the GAN) to give the false impression of novelty. Merely citing sources in such cases doesn't shield you from being accused of plagiarism.

>As far as the justification goes, there are some generative based approaches for solving parametric PDEs even now.

Not disputing that there are papers out there where the use is justified; of course there are skilled researchers with academic integrity. But again, in this paper, and in the ones I'm talking about in general, the setting is exactly the one in my second paragraph, where the use of GANs is clearly not justified at all.

>but I don’t think it’s that bad

Again, in the context of my second paragraph (because that's literally what they're doing), it is bad.

−17

Optimal-Asshole t1_j9c4h8d wrote

Okay lol so I’m actually researching kinda similar things and I assumed this paper was related because it used similar tools but upon a closer look, nope nvm. It’s not even using the generative model for anything useful.

So their paper just shows that the basic idea of least squares PDE solving can be used for generative models. Okay now it’s average class project tier. I guess this demonstrates that yes these workshops accept literally anything.

Edit: it’s still not plagiarism. It’s just not very novel. Plagiarism is stealing ideas without credit. What they did was discuss an existing idea and extend it in a very small way experimentally only. Not plagiarism.

14

vikumwijekoon97 t1_j9fho30 wrote

I was looking into similar things in my undergrad thesis. My math wasn't great, so I couldn't comprehend much. Are there actual NN methods that can solve PDEs without depending on the initial conditions? I was looking into soft-body physics simulation using GPUs.

2

Optimal-Asshole t1_j9fktzg wrote

> Are there actual NN methods that can solve PDEs without depending on the initial conditions?

The initial condition needs to be known (though it can be noisy, e.g. measurements corrupted by noise [1]), but NN-based models can solve some parametric PDEs more efficiently than traditional solvers [2].

There is also a lot of work on training NNs on data generated by traditional methods, and this can be combined with the above approach to solve a whole class of problems at once [3].

Solving a whole parametric family of PDEs (i.e. a parameterized family of initial conditions) and handling complicated geometries will be the next avenue of this specific field IMO. Actually it is being actively worked on.

[1] https://arxiv.org/abs/2205.07331

[2] https://arxiv.org/abs/2110.13361

[3] https://arxiv.org/abs/2111.03794
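For readers less familiar with this area, here is a minimal physics-informed-style sketch of the basic "minimize the PDE residual by least squares" idea (illustrative only, assuming PyTorch; this is not the specific method of [1]–[3], and boundary terms are omitted for brevity):

```python
import torch
import torch.nn as nn

# Network u_theta(t, x); purely illustrative architecture.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def pde_residual(t, x):
    """Residual of the heat equation u_t - 0.1 * u_xx at collocation points (t, x)."""
    t.requires_grad_(True)
    x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - 0.1 * u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    t, x = torch.rand(256, 1), torch.rand(256, 1)            # interior collocation points
    x0 = torch.rand(256, 1)                                   # points on the t = 0 slice
    loss_pde = (pde_residual(t, x) ** 2).mean()               # squared PDE residual
    u0 = net(torch.cat([torch.zeros_like(x0), x0], dim=1))
    loss_ic = ((u0 - torch.sin(torch.pi * x0)) ** 2).mean()   # fit u(0, x) = sin(pi x)
    opt.zero_grad()
    (loss_pde + loss_ic).backward()
    opt.step()
```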

1

AlmightySnoo OP t1_j9c56bx wrote

>It’s not even using the generative model for anything useful.

Thank you, that's literally what I meant in my second paragraph. They're literally training the GAN to learn Dirac distributions. The noise has no use, and the discriminator eventually ends up learning to do roughly the job of a simple squared loss.
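Put differently (just a sketch of the same point):

```latex
% Y = phi(X) almost surely, so the conditional law the GAN is asked to match is a
% point mass, and the noise input cannot contribute anything:
\[
  \mathbb{P}\bigl(Y \in \cdot \mid X = x\bigr) = \delta_{\varphi(x)},
  \qquad\text{hence any optimal conditional generator satisfies }
  G(x, z) = \varphi(x)\ \text{for almost every } z .
\]
```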

−6

[deleted] t1_j9ddu1v wrote

I have a hard time being mad at people trying every different combination of everything; it's good to know whether it works better or not. At some point, however, it's just bloat and may make it more difficult to do research.

If I was trying to publish papers myself it might get to me.

9

johnnydaggers t1_j9dmhhv wrote

I have been doing my best to beat them back in peer-review, but I can only do so much...

5

Algoartist t1_j9qxce6 wrote

It's a workshop. Also some events just accept every paper. Concerns only relevant for serious conferences and journals. Also authors are from Pakistan and they go for quantity over quality

2

pyepyepie t1_j9evz4c wrote

Personally, I think plagiarism is a terrible word to use in this case. I also don't like this shaming of young researchers who seem to have good intentions. That being said, I don't particularly enjoy reading ML papers; I feel I learn more from math and ML books, and I only read the papers I need for my work or the classics.

1