Affectionate_Leg_686 t1_j8ebfju wrote
Reply to comment by GFrings in [D] Is a non-SOTA paper still good to publish if it has an interesting method that does have strong improvements over baselines (read text for more context)? Are there good examples of this kind of work being published? by orangelord234
I second this, adding that "reviewer roulette" is now the norm in other research communities too. Some conferences are making an effort to improve the reviewing process, e.g., ICML has meta-reviewers and an open back-and-forth discussion between authors and reviewers. Still, this has not solved the problem.
Regarding your work: if possible, define a metric that encapsulates accuracy vs. cost (memory and compute), show how it varies across established models, and then use that as part of your case for why your model is more "efficient" than the alternative of running X models in parallel.
In my experience, a proxy metric for cost works well for the ML crowd, e.g., operation counts or bits transferred. Of course, if you can measure wall-clock time on existing hardware, say a GPU or CPU, that would be best.
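For concreteness, here is a minimal sketch of what I mean (PyTorch, with torchvision models and placeholder accuracies standing in for your own numbers): tabulate a memory proxy (parameter count), a compute proxy (measured latency), and a simple accuracy-per-cost ratio for each model you compare against.

```python
import time
import torch
import torchvision.models as models

# Hypothetical stand-ins: swap in your model and the baselines you compare against.
candidates = {
    "baseline": models.resnet50(weights=None),
    "ours": models.resnet18(weights=None),
}
# Placeholder accuracies -- replace with your measured numbers.
accuracies = {"baseline": 0.76, "ours": 0.74}

def param_count(model):
    # Proxy for memory cost: total number of parameters.
    return sum(p.numel() for p in model.parameters())

def latency_ms(model, device="cpu", n_runs=50):
    # Wall-clock proxy for compute cost on existing hardware.
    model = model.to(device).eval()
    x = torch.randn(1, 3, 224, 224, device=device)
    with torch.no_grad():
        for _ in range(5):  # warm-up runs
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs * 1e3

for name, model in candidates.items():
    params = param_count(model)
    ms = latency_ms(model)
    # Toy "accuracy per millisecond" metric; pick whatever ratio fits your argument.
    eff = accuracies[name] / ms
    print(f"{name}: {params/1e6:.1f}M params, {ms:.1f} ms/forward, eff={eff:.4f}")
```

The exact ratio matters less than showing the trade-off consistently across all the models in your comparison.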
Good luck!