
learn-deeply t1_jdl1bmp wrote

Anyone else tired of papers that obscure a simple concept with endless paragraphs of verbose gibberish? These 17 pages could be a few sentences.

TL;DR: the authors wrote prompts telling GPT-4 to fix code, given some unit tests and the output of the broken code. It performs better than GPT-4 without access to the code execution output.

https://github.com/noahshinn024/reflexion-human-eval/blob/main/reflexion.py#L7-L12
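The loop described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code from the linked repo: `query_model` is a stand-in for a real GPT-4 API call, and the prompt wording is invented.

```python
# Minimal sketch of the self-repair loop: run candidate code against its unit
# tests, feed any failure output back into the prompt, and ask the model for a
# corrected version. `query_model` is a placeholder for a GPT-4 call.

def run_tests(code: str, tests: str) -> tuple[bool, str]:
    """Execute candidate code plus its unit tests; return (passed, output)."""
    env: dict = {}
    try:
        exec(code, env)
        exec(tests, env)
        return True, "all tests passed"
    except Exception as e:
        return False, f"{type(e).__name__}: {e}"

def repair_loop(code: str, tests: str, query_model, max_iters: int = 3) -> str:
    """Iteratively ask the model to fix `code`, using test failure output."""
    for _ in range(max_iters):
        passed, output = run_tests(code, tests)
        if passed:
            return code
        prompt = (
            "The following code fails its unit tests.\n"
            f"Code:\n{code}\n\nTests:\n{tests}\n\n"
            f"Failure output:\n{output}\n\nReturn a corrected version."
        )
        code = query_model(prompt)  # in practice: a GPT-4 chat completion
    return code
```

The point of the paper's comparison is the `output` string in the prompt: the baseline is the same loop without it.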

369

_Arsenie_Boca_ t1_jdlc2ah wrote

Thanks! If that is really the TL;DR, I have never seen an abstract that beats about the bush so much

69

nekize t1_jdldodi wrote

Sadly, that is what academia has come to. I am doing my PhD and 80% of my papers is just padding. And if you don't follow the “template” you can't publish anything.

61

artsybashev t1_jdlml1f wrote

Sounds like we need an LLM to generate padding for academia and an LLM to write the TL;DR for the readers. World is dumb.

46

danielbln t1_jdm967m wrote

26

artsybashev t1_jdmpwwd wrote

The fluffy, overly complex writing around your main message has worked as a barrier or prefilter to weed out bad job candidates or unqualified contributions to scientific discussion. LLMs are destroying this part. Interesting to see where this leads.

14

fnordstar t1_jdv0sl3 wrote

That just seems like elitism. Like rejecting someone for having an accent instead of speaking Oxford English.

6

VelveteenAmbush t1_jdsjab4 wrote

Also an LLM to read all of the TL;DRs and tell me which of them I should pay attention to.

1

Fal_the_commentator t1_jdlo48r wrote

Good papers don't need to do that. If a paper is self-contained, there is no need for gibberish.

In my experience, it happens when the paper is not planned before being written, or when the results/methodology are either not refined or not interesting enough.

16

maskedpaki t1_jdlu3k1 wrote

Well, at least you can use GPT-4 for the padding now.

4

Normal_Antelope_2556 t1_jdlqc42 wrote

As a person who aspires to go into research in this field, how bad is it? Can people even do their own research?

2

nekize t1_jdlrqnt wrote

Of course you can. Depending on which group you end up in, there is a lot of cool stuff being done outside of NLP and computer vision (if you consider those two “solved”).

5

rsha256 t1_jdq13w4 wrote

What does CV have that makes it “solved”? Stable Diffusion?

1

learn-deeply t1_jdnkaw7 wrote

If you need to pad your paper, that means there hasn't been enough original research done.

1

ellev3n11 t1_jdp7evr wrote

That is not what the paper is about. The paper actually has nothing to do with code. Why are people here so obtuse?

11

pm_me_your_pay_slips t1_jdv6l50 wrote

While the paper doesn't mention any code, there is no practical difference: replace the RL environment with a compiler/interpreter, and action selection with prompt engineering.

2

farmingvillein t1_jdo16sz wrote

> This 17 page could be a few sentences.

> Tl;DR the authors wrote prompts to tell GPT-4 to fix code given some unit tests and the output of the broken code. It performs better than GPT-4 that doesn't have access to the output of the code execution.

I agree with your overall sentiment--the paper IMO could, at the very least, be substantially reorganized for clarity--but your summary isn't actually accurate, since the paper itself has nothing to do with coding(!).

The coding work is all in their blog post...

...which suffers from the same issue: a long preamble to scroll through before finding the core nugget.

10

gmork_13 t1_jdlpq90 wrote

Sometimes I feel like a toddler for doing it, but I always scroll to the figures first, and for most papers that's the TL;DR.

9

lego3410 t1_jdmi0hv wrote

Yes! But GPT-4 could summarize it for me.

1

massimosclaw2 t1_jdmvjlp wrote

When you haven’t done much, it's best to obscure it in some complicated language /s

1

noobgolang t1_jdpmeq9 wrote

Stop gate keeping researchhhh!!!! It is already that bad

1