emad_eldeen t1_j5uvkvw wrote
Reply to comment by FastestLearner in [R] Best service for scientific paper correction by Meddhouib10
Grammarly is great!
emad_eldeen t1_j5uvdma wrote
Reply to Classify dataset with only 100 images? by Murii_
One way is to use data augmentation to increase the sample size.
Another way is to use a related dataset with more samples that is available online. Treat it as a source domain and use it to train your CNN model, then use either transfer learning or semi-supervised domain adaptation to adapt the model to your target domain.
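A minimal sketch of the first suggestion (the function names and the flip-based transform are illustrative; in practice you would use a library such as torchvision, which offers many more label-preserving transforms):

```python
import random

def augment(image):
    """Return a randomly transformed copy of a 2D image (list of rows).
    Sketch only: a random horizontal flip, one of the simplest
    label-preserving augmentations."""
    out = [row[:] for row in image]
    if random.random() < 0.5:          # flip left-right with probability 0.5
        out = [row[::-1] for row in out]
    return out

def augment_dataset(images, copies=4):
    """Grow a small dataset by generating `copies` augmented variants per image."""
    return [augment(img) for img in images for _ in range(copies)]
```

With only 100 images, generating a few augmented copies per image (flips, crops, small rotations, color jitter) can noticeably reduce overfitting.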
emad_eldeen t1_j2zhnx5 wrote
Reply to comment by wang422003 in [D]There is still no discussion nor response under my ICLR submission after two months. What do you think I should do? by minogame
First, I'm sure they read your reply, probably more than once, and I'm sure they come back many times to check whether the other reviewers have responded ... this is human nature. The question is what keeps them from replying. The first possibility is that these reviewers simply ask their students to do the review for them and don't go back to them for the rebuttal. The second is that they formed an initial opinion of the paper, maybe they didn't like the presentation of the ideas or the writing, and they are not willing to change their feedback regardless of what you say. I'm not sure about the exact reason, but this is not unique to ICLR; it is probably just more visible there, and it is the case in most rebuttals.
emad_eldeen t1_j2zg827 wrote
Reply to [D]There is still no discussion nor response under my ICLR submission after two months. What do you think I should do? by minogame
If the score is borderline or lower and there is another possible venue to submit to, withdraw and resubmit. If not, just wait ... maybe the AE is fair enough to read your reply and give you a good judgement.
emad_eldeen t1_ixh95fx wrote
Reply to How to efficiently re-train a classification model with an addition of a new class? by kingfung1120
I think this falls under incremental learning, where you seek to learn the new class from the new data without forgetting the old classes.
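A common first step, sketched below with a hypothetical `expand_classifier` helper: keep the trained backbone and old classifier weights, and only append freshly initialized rows for the new class (real code would do this on an `nn.Linear` layer rather than plain lists):

```python
import random

def expand_classifier(weights, n_new=1):
    """Extend a final-layer weight matrix (one weight vector per class)
    with rows for `n_new` new classes, keeping the learned rows intact.
    Illustrative sketch; incremental-learning methods additionally
    combat forgetting (e.g. replay buffers or distillation losses)."""
    dim = len(weights[0])
    new_rows = [[random.gauss(0.0, 0.01) for _ in range(dim)]
                for _ in range(n_new)]
    return weights + new_rows
```

After expanding the head, fine-tuning on a mix of old and new examples (or with a distillation term against the old model) helps avoid catastrophic forgetting.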
emad_eldeen t1_iwfhg0o wrote
Reply to [D] Is it legitimate for reviewers to ask you compare with papers that are not peer-reviewed? by Blasphemer666
Besides the conference policies u/dojoteef mentioned, you may be reviewing for a journal that has no explicit rule about this. In that case, some papers have been on arXiv for a long time without being published anywhere else, yet are considered standard references by many, such as CPC for example. In that case, I guess it is OK to ask the authors to consider such papers.
However, if that is not the case and the paper is recent, it may not be a good idea. But ultimately it is left to your judgment as a reviewer; an expert in the domain.
You may also ask the authors to consider an arXiv paper, but don't make it the basis of your accept/reject decision.
emad_eldeen t1_ivw9rd5 wrote
There's no strict rule, but you usually use a lower learning rate for fine-tuning than the one used in pretraining.
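A common heuristic (the 10x factor below is illustrative, not a fixed rule) is to scale the pretraining learning rate down for fine-tuning:

```python
PRETRAIN_LR = 1e-3       # learning rate used during pretraining (illustrative)
FINETUNE_FACTOR = 0.1    # common heuristic: roughly 10x smaller for fine-tuning

finetune_lr = PRETRAIN_LR * FINETUNE_FACTOR

# With PyTorch this would look like (sketch, not executed here):
# optimizer = torch.optim.Adam(model.parameters(), lr=finetune_lr)
```

Some people also use per-layer rates, with the smallest rate on early backbone layers and a larger one on the new head.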
emad_eldeen t1_j5uw5sp wrote
Reply to Efficient way to tune a network by changing hyperparameters? by NinjaUnlikely6343
Wandb is the best! https://wandb.ai/
Check out the hyperparameter sweep option. It is FANTASTIC!
You can set a range or a list of values for each hyperparameter and let it run.
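A sweep is driven by a small config like the one below (the metric name, project name, and hyperparameter choices are illustrative; see the W&B sweeps docs for the full schema):

```python
# Hypothetical sweep configuration for a W&B hyperparameter sweep.
sweep_config = {
    "method": "bayes",  # search strategy: "grid", "random", or "bayes"
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-5, "max": 1e-2},   # a range
        "batch_size": {"values": [16, 32, 64]},        # a list of values
        "dropout": {"values": [0.1, 0.3, 0.5]},
    },
}

# In a real run you would then launch it with (sketch, not executed here):
# sweep_id = wandb.sweep(sweep_config, project="my-project")
# wandb.agent(sweep_id, function=train)
```

The agent then repeatedly calls your training function with sampled hyperparameters and logs everything to the dashboard.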