BeatLeJuce

BeatLeJuce t1_j6n6x9b wrote

Reply to comment by pfm11231 in [D] deepmind's ai vision by [deleted]

It looks at the screen. Your question indicates you're not well versed in AI. I'd advise you to read up more on fundamental deep learning techniques if you don't know what a CNN does.

1

BeatLeJuce t1_j6mlxjc wrote

Your question is answered in the abstract itself ("using only pixels and game points as input"), and repeated multiple times in the text ("In our formulation, the agent’s policy π uses the same interface available to human players. It receives raw RGB pixel input x_t from the agent’s first-person perspective at timestep t, produces control actions a_t ∼ π simulating a gamepad, and receives game points ρ_t attained"). Did you even attempt to read the paper? The concrete architecture, including the CNN, is also shown in Figure S10.
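To make that interface concrete, here's a minimal Python/PyTorch sketch of a pixels-to-gamepad policy. The layer sizes and action count are invented for illustration and are not the architecture from Figure S10; the point is just that the only input is the raw RGB frame x_t and the only output is a sampled action a_t.

```python
import torch
import torch.nn as nn

class PixelPolicy(nn.Module):
    """Illustrative sketch only: a CNN policy mapping raw RGB frames to
    discrete gamepad-style actions. Layer sizes and the action count are
    invented for this example, NOT taken from Figure S10."""

    def __init__(self, n_actions=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(n_actions)  # logits over gamepad actions

    def forward(self, rgb_frame):
        # rgb_frame: raw pixels x_t with shape (batch, 3, H, W)
        logits = self.head(self.encoder(rgb_frame))
        # sample an action a_t ~ pi(. | x_t)
        return torch.distributions.Categorical(logits=logits).sample()

# toy usage: one 96x96 RGB frame in, one action index out
policy = PixelPolicy()
action = policy(torch.rand(1, 3, 96, 96))
print(action)
```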

3

BeatLeJuce t1_j4m9pp6 wrote

Overall nice, but the article also uses some expressions without ever explaining them. For example: what is an H100, and what is an A100? Somewhere in the article it says that the H100 is an RTX 40 card, somewhere else it says the A100 is an RTX 40 card. Which is which?

Also, what is TF32? It's a term that appears in a paragraph without any explanation.

0

BeatLeJuce t1_iveapvr wrote

To be fair, "4" wasn't an option you had as a reviewer, so it was either 3 or 5, and 5 is "slightly below acceptance threshold". So if you felt there was a flaw in the paper (even one the authors could recover from in the rebuttal), "3" was the natural vote to give. Personally, for most papers I rated "3", the authors could still come back from it if they managed to address my concerns properly.

2

BeatLeJuce t1_iuzz1ku wrote

Layer norm is not about fitting better, but about training more easily (activations don't explode, which makes optimization more stable).
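A minimal numpy sketch of what layer norm computes (the function name and toy shapes are my own, purely for illustration):

```python
import numpy as np

# Each sample's activations are normalized to zero mean / unit variance
# over the feature dimension, then rescaled by learned gamma/beta.
def layer_norm(x, gamma, beta, eps=1e-5):
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy usage: even with deliberately huge activations, the output stays
# well-scaled, which is the "optimization stays stable" point above.
x = 50.0 * np.random.randn(4, 8)
y = layer_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(y.mean(axis=-1), y.std(axis=-1))  # roughly 0 and 1 per sample
```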

Is your list limited to "discoveries that are now used everywhere"? Because there are a lot of things that would've made it onto your list if you'd compiled it at different points in time but are now discarded (i.e., I'd say they were fads), e.g. GANs.

Other things are currently hyped, but it's not clear how they'll end up long term:

  * Diffusion models are another thing that's currently hot.

  * Combining multimodal inputs, which I'd say means "CLIP-like things".

  * Self-supervision as a topic (with "contrastive methods" having been a thing).

  * Federated learning is likely here to stay.

  * NeRF will likely have a lasting impact, too.

3

BeatLeJuce t1_isshboa wrote

I agree, though what gets published in dedicated "software editions" is usually okay. But it's a question of what you're optimizing for. Scientific publications mostly optimize for good (or at least impressive) science and novelty, not software quality. But if you don't want to publish in scientific journals, why publish at all?

1

BeatLeJuce t1_isn6t8h wrote

What are you hoping to get out of this? Since you're not in academia anymore, why bother at all? And since you've decided to do this anyway, why not do it right? Your advisor seems to think you have a chance to publish this at a good, community-relevant venue, which is leaps and bounds better than JOSS or JORS. Why, you ask? Well, a couple of reasons:

  1. Discoverability: I don't know who your end users are going to be, but I can almost guarantee you that they won't be reading JOSS or JORS. They will, however, likely read their community's journals, maybe even the OSS variants of those. So if you want to tell the world "look, I made something useful", don't publish in JOSS/JORS; you'll reach way more potential users by publishing in a journal your end users are actually going to read.

  2. Prestige: It will look so much better on every co-author's CV. You already have your PhD and don't need this right now, but your advisor (and every other potential co-author) likely cares about this. I mean, if you already have 10 NeurIPS publications, one JORS paper might make you seem more well-rounded. Likewise, if you're in software development now, it might actually be beneficial to demonstrate to employers that you're not just a theoretician. But in general, people in research will not take a JOSS publication as seriously.

  3. Valuation of your work: Very related to the previous point, but JOSS/JORS aren't where good research ends up. Scientifically, I'd rank them as low-tier venues where you publish stuff that wasn't good enough to make it into a big journal. I.e., my first line of thinking would be "okay, the authors created something that wasn't good enough to make it into the software edition of the journal in their field" (IME most ML-adjacent fields have one). YMMV, this is just my very subjective and biased impression. I've never actually checked out JOSS/JORS, but this is how I would judge it, and how I would assume others would judge it.

As others have said: if you just need a citeable artefact, there are quicker ways (arXiv or Zenodo). I see JORS/JOSS as a sort of middle ground: nicer and better than just putting it on arXiv, but definitely not as impactful as a "proper" scientific publication.

14