dojoteef t1_iwf3rdk wrote

It depends. Most conferences explicitly state their policy on preprints. For example, NeurIPS states:

> What is the policy on comparisons to recent work? Papers appearing less than two months before the submission deadline are generally considered concurrent to NeurIPS submissions. Authors are not expected to compare to work that appeared only a month or two before the deadline.

> Are arxiv papers also subject to the policy above? Yes, we do not distinguish arxiv papers and other published (conference & journal) papers, and the two-month rule applies in the same way. More nuanced judgements, including how to determine the date of publication, should be made by the area chair handling the submission.

and ICLR has a similar policy:

> Q: Are authors expected to cite and compare with very recent work? What about non peer-reviewed (e.g., ArXiv) papers? (updated on 7 November 2022)

> A: We consider papers contemporaneous if they are published (available in online proceedings) within the last four months. That means, since our full paper deadline is September 28, if a paper was published (i.e., at a peer-reviewed venue) on or after May 28, 2022, authors are not required to compare their own work to that paper. Authors are encouraged to cite and discuss all relevant papers, but they may be excused for not knowing about papers not published in peer-reviewed conference proceedings or journals, which includes papers exclusively available on arXiv. Reviewers are encouraged to use their own good judgement and, if in doubt, discuss with their area chair.

You should check the policy of the specific conference you are reviewing for or submitting to for the most relevant instructions.

26

dojoteef t1_ivyftve wrote

There is a lot of uncertainty out there for sure. I know people who've done multiple internships with the same company over the course of their PhD and are worried they might not receive a full-time offer when they graduate. I'm not sure if their concerns are founded or not, but that's their current outlook.

61

dojoteef t1_ivou9rr wrote

No one can answer that question, since not all possible output images are equally probable (some are even impossible given the trained network's weights). You might be able to make an empirical estimate, but enumerating the true output space of any sufficiently complex NN is an open problem.
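To illustrate what an empirical estimate might look like, here is a toy sketch (the setup is entirely illustrative, not from the comment): a small fixed random network maps latent vectors to tiny binary "images", and sampling many latents shows that only a fraction of the combinatorially possible outputs are ever produced.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "generator": a fixed random 2-layer network mapping a 4-d latent
# to a 3x3 binary image (9 pixels -> 2**9 = 512 possible images).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 9))

def generate(z):
    h = np.tanh(z @ W1)           # hidden activations
    return (h @ W2 > 0).astype(int)  # threshold to a binary image

# Empirically estimate how much of the output space is reachable
# by sampling many latents and counting distinct images produced.
samples = generate(rng.normal(size=(100_000, 4)))
distinct = {tuple(img) for img in samples}

total_possible = 2 ** 9
print(f"distinct images observed: {len(distinct)} / {total_possible}")
```

Because the latent space is only 4-dimensional, the 9 threshold units can carve out far fewer than 512 regions, so many of the 512 candidate images are simply unreachable given these weights. This is the kind of gap between the nominal and the actual output space the comment refers to, though for real generative models neither sampling nor enumeration settles the question.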

8

dojoteef t1_iv0hfoe wrote

Slightly off-topic: I'm a huge John Carmack fan, but he isn't the author of that code. It's part of the engine code his company released for Quake 3 Arena. For details, check out:

https://www.beyond3d.com/content/articles/8/

8

dojoteef t1_iupqxxr wrote

It seems most commenters are pointing out reasons why the proposed setup seems deficient in one way or another.

But the point of the research is to highlight potential blind spots even in seemingly "superhuman" models, even when the failure modes are weird edge cases that are not broadly applicable.

Once the gaps are identified, mitigation strategies can be devised that make training more robust. In that sense, the research is quite useful even if a knowledgeable Go player might not be impressed by the demonstrations highlighted in the paper.

90

dojoteef t1_iuk3vq6 wrote

I've been using TypeScript recently for some of my research and it is so much faster than Python. Additionally, the type system is much nicer than Python's type annotations.

I'm glad to see some diversification in the ML space. And while I know it's not the focus of the project, it might be nice to have a subset that runs in the browser, which would help in building client-side apps that require ML.

5

dojoteef t1_itv72az wrote

Being able to see the names of other reviewers doesn't by itself mean the authors can guess your identity; the more troubling possibility is that a reviewer is colluding with the authors of that paper.

Did you reach out to the meta-reviewer and the chairs? That should be the first thing you do when you run into such a situation.

65

dojoteef t1_iri92a7 wrote

I got my start doing research exactly like this. At the time I was looking to apply to grad schools, but had been working in industry for years without any formal ML training. I found a first-year professor who had posted about needing collaborators, so I applied.

It turns out the letter of recommendation I received from him was pivotal to my acceptance by my current advisor, since it demonstrated recent research experience; I also had great industry letters of recommendation speaking to my abilities as a lead engineer, but they couldn't speak to research.

I'm sure anyone you collaborate with will be very appreciative of your efforts.

5

dojoteef t1_irgxlt4 wrote

I presume you worked with a faculty member on this paper. If so, you should discuss these questions with them. Most will pay for your conference expenses (travel, accommodation, and daily expenses like food), since their grants usually include funding for conference travel.

2