emad_eldeen t1_j5uvdma wrote

One way is to use data augmentation to increase the sample size.
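To make this concrete, here is a minimal sketch of the idea, assuming a 1D feature representation and two common augmentations (jittering with Gaussian noise and random scaling); the function names and parameters are illustrative, not from any specific library:

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(x, sigma=0.03):
    # add small Gaussian noise to every sample
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    # multiply each sample by a random factor close to 1
    factors = rng.normal(1.0, sigma, size=(x.shape[0], 1))
    return x * factors

def augment(x, y):
    # return the originals plus two augmented copies (3x the data);
    # labels are unchanged because the augmentations are label-preserving
    x_aug = np.concatenate([x, jitter(x), scale(x)], axis=0)
    y_aug = np.concatenate([y, y, y], axis=0)
    return x_aug, y_aug

# toy dataset: 100 samples with 64 features each
x = rng.normal(size=(100, 64))
y = rng.integers(0, 2, size=100)
x_aug, y_aug = augment(x, y)
print(x_aug.shape, y_aug.shape)  # (300, 64) (300,)
```

The key constraint is that the transformations must not change the label; which augmentations are safe depends on your data modality (e.g., flips for images, jitter/scaling for time series).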

The other way is to use another dataset, available online, that has more samples from a related domain. Treat it as a source domain and use it to pretrain your CNN model. Then you can use either transfer learning or semi-supervised domain adaptation to adapt the model to your target domain.
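A minimal sketch of the pretrain-then-fine-tune idea, using plain logistic regression in NumPy as a stand-in for the CNN (the synthetic source/target data and all names here are illustrative assumptions, not the commenter's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(x, y, w=None, lr=0.1, epochs=200):
    # logistic regression via gradient descent;
    # pass w to continue training from pretrained weights
    if w is None:
        w = np.zeros(x.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))
        w -= lr * x.T @ (p - y) / len(y)
    return w

# large labeled "source" dataset (hypothetical stand-in)
x_src = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y_src = (x_src @ w_true > 0).astype(float)

# small "target" dataset from a shifted input distribution
x_tgt = rng.normal(loc=0.2, size=(50, 20))
y_tgt = (x_tgt @ w_true > 0).astype(float)

w_src = train_logreg(x_src, y_src)            # pretrain on source domain
w_ft = train_logreg(x_tgt, y_tgt,             # fine-tune on target domain,
                    w=w_src.copy(),           # starting from source weights
                    lr=0.05, epochs=50)

acc = np.mean(((x_tgt @ w_ft) > 0) == y_tgt)
```

With a real CNN the same pattern applies: pretrain on the source dataset, then fine-tune (often with a lower learning rate, and sometimes with early layers frozen) on the small target dataset.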

emad_eldeen t1_j2zhnx5 wrote

First, I'm sure they read your reply, maybe more than once, and I'm sure they come back many times to see whether other reviewers responded ... that's human nature. The question is why they don't reply. One possibility is that these reviewers delegate the review to their students and never go back to them for the rebuttal. Another is that they formed an initial opinion of the paper, maybe they didn't like the presentation of the ideas or the writing, and they are not willing to change their feedback no matter what you say. I'm not sure of the exact reason, but this isn't unique to ICLR; it seems to be the common case in most rebuttals.

emad_eldeen t1_iwfhg0o wrote

Besides conference policies, as u/dojoteef mentioned, you may be reviewing for a journal that has no explicit rule on this. In that case, you may find papers that have been on arXiv for a long time without being published anywhere else, yet are widely cited, such as CPC for example. Then I think it is OK to ask the authors to consider such papers.

However, if that is not the case and the paper is recent, it may not be a good idea. Ultimately, it is left to your judgment as a reviewer, an expert in the domain.

You may also ask the authors to consider an arXiv paper without making it the basis of your accept/reject decision.
