Submitted by Singularian2501 t3_1215dbl in MachineLearning

Paper: https://arxiv.org/abs/2303.11366

Blog: https://nanothoughts.substack.com/p/reflecting-on-reflexion

Github: https://github.com/noahshinn024/reflexion-human-eval

Twitter: https://twitter.com/johnjnay/status/1639362071807549446?s=20

Abstract:

>Recent advancements in decision-making large language model (LLM) agents have demonstrated impressive performance across various benchmarks. However, these state-of-the-art approaches typically necessitate internal model fine-tuning, external model fine-tuning, or policy optimization over a defined state space. Implementing these methods can prove challenging due to the scarcity of high-quality training data or the lack of well-defined state space. Moreover, these agents do not possess certain qualities inherent to human decision-making processes, specifically the ability to learn from mistakes. Self-reflection allows humans to efficiently solve novel problems through a process of trial and error. Building on recent research, we propose Reflexion, an approach that endows an agent with dynamic memory and self-reflection capabilities to enhance its existing reasoning trace and task-specific action choice abilities. To achieve full automation, we introduce a straightforward yet effective heuristic that enables the agent to pinpoint hallucination instances, avoid repetition in action sequences, and, in some environments, construct an internal memory map of the given environment. To assess our approach, we evaluate the agent's ability to complete decision-making tasks in AlfWorld environments and knowledge-intensive, search-based question-and-answer tasks in HotPotQA environments. We observe success rates of 97% and 51%, respectively, and provide a discussion on the emergent property of self-reflection.
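
For intuition, here is a minimal sketch of the trial loop the abstract describes. The helper names (`llm`, `run_episode`) are hypothetical stand-ins, not the authors' implementation:

```python
# Sketch of a Reflexion-style trial loop (hypothetical API; `llm` stands in
# for any chat-model call, `run_episode` for one rollout in AlfWorld,
# HotPotQA, etc.).

def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a language-model call

def run_episode(task: str, reflections: list[str]) -> tuple[str, bool]:
    raise NotImplementedError  # placeholder: returns (trajectory, success)

def reflexion(task: str, max_trials: int = 5) -> str:
    reflections: list[str] = []  # the agent's dynamic memory
    trajectory = ""
    for _ in range(max_trials):
        trajectory, success = run_episode(task, reflections)
        if success:
            break
        # Self-reflection: ask the model to diagnose its own failure
        # and carry that diagnosis into the next trial.
        reflections.append(llm(
            f"Task: {task}\nFailed attempt:\n{trajectory}\n"
            "Briefly explain what went wrong and how to avoid it next time."
        ))
    return trajectory
```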

241

Comments

learn-deeply t1_jdl1bmp wrote

Anyone else tired of papers that obscure a simple concept with endless paragraphs of verbose gibberish? These 17 pages could be a few sentences.

TL;DR: the authors wrote prompts that tell GPT-4 to fix code given some unit tests and the output of the broken code. It performs better than GPT-4 without access to the output of the code execution.

https://github.com/noahshinn024/reflexion-human-eval/blob/main/reflexion.py#L7-L12
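
In other words, something like this sketch (my paraphrase with hypothetical helpers; see the linked repo for the actual prompts):

```python
import subprocess
import tempfile

def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a GPT-4 call

def run_tests(code: str, tests: str) -> tuple[bool, str]:
    """Run candidate code against its unit tests; return (passed, output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + tests)
        path = f.name
    proc = subprocess.run(["python", path], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def self_repair(problem: str, tests: str, max_iters: int = 4) -> str:
    code = llm(f"Write Python code that solves:\n{problem}")
    for _ in range(max_iters):
        passed, output = run_tests(code, tests)
        if passed:
            break
        # The key step: feed the failing execution output back to the model.
        code = llm(
            f"Problem:\n{problem}\nYour code:\n{code}\n"
            f"It failed its unit tests with this output:\n{output}\n"
            "Return a corrected version."
        )
    return code
```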

369

_Arsenie_Boca_ t1_jdlc2ah wrote

Thanks! If that is really the TL;DR, I have never seen an abstract that beats about the bush so much

69

nekize t1_jdldodi wrote

Sadly, that is what academia has come to. I am doing my PhD and 80% of my papers is just padding. And if you don't follow the “template” you can't publish anything.

61

artsybashev t1_jdlml1f wrote

Sounds like we need an LLM to generate padding for academia and an LLM to write the TL;DR for the readers. The world is dumb.

46

danielbln t1_jdm967m wrote

26

artsybashev t1_jdmpwwd wrote

The fluffy, overly complex writing around your main message has worked as a barrier, a prefilter that weeds out bad job candidates or unqualified contributions to scientific discussion. LLMs are destroying this part. Interesting to see what this leads to.

14

fnordstar t1_jdv0sl3 wrote

That just seems like elitism. Like rejecting someone for having an accent instead of speaking Oxford English.

6

VelveteenAmbush t1_jdsjab4 wrote

Also an LLM to read all of the tldrs and tell me which of them I should pay attention to.

1

Fal_the_commentator t1_jdlo48r wrote

Good papers don't need to do that. If a paper is self-contained, there is no need for gibberish.

In my experience, it comes from the paper not being planned before it is written, or from results/methodology that are either not refined or not interesting enough.

16

maskedpaki t1_jdlu3k1 wrote

Well, at least you can use GPT-4 for padding now.

4

Normal_Antelope_2556 t1_jdlqc42 wrote

As a person who aspires to go into research in this field, how bad is it? Can people even do their own research?

2

nekize t1_jdlrqnt wrote

Of course you can. Depending on which group you end up in, there is a lot of cool stuff being done outside of NLP and computer vision (if you consider these two “solved”).

5

rsha256 t1_jdq13w4 wrote

What does CV have that makes it “solved”? Stable Diffusion?

1

learn-deeply t1_jdnkaw7 wrote

If you need to pad your paper, that means there hasn't been enough original research done.

1

ellev3n11 t1_jdp7evr wrote

That is not what the paper is about. The paper has nothing to do with code actually. Why are people here so obtuse?

11

pm_me_your_pay_slips t1_jdv6l50 wrote

While the paper doesn't mention any code, there is no practical difference: replace the RL environment with a compiler/interpreter, and action selection with prompt engineering.

2

farmingvillein t1_jdo16sz wrote

> This 17 page could be a few sentences.

> Tl;DR the authors wrote prompts to tell GPT-4 to fix code given some unit tests and the output of the broken code. It performs better than GPT-4 that doesn't have access to the output of the code execution.

I agree with your overall sentiment--the paper IMO could be, at the very least, substantially reorganized for clarity--but your summary isn't actually accurate, since the paper itself has nothing to do with coding(!).

The coding work is all in their blog post...

...which also suffers from the same issue: a long preamble to scroll down and find the core nugget.

10

gmork_13 t1_jdlpq90 wrote

Sometimes I feel like a toddler for doing it, but I always scroll to the images first and for most papers that’s the TLDR.

9

lego3410 t1_jdmi0hv wrote

Yes! But GPT-4 could summarize it for me.

1

massimosclaw2 t1_jdmvjlp wrote

When you haven’t done much, best to obscure it in some complicated language /s

1

noobgolang t1_jdpmeq9 wrote

Stop gate keeping researchhhh!!!! It is already that bad

1

AI-Pon3 t1_jdlgw1x wrote

Interesting methodology/technology. I realize it's GPT-4 plus a refining process, but even so, going from 67% to 88% is ~64% fewer errors, which proves it's a powerful technique even when the underlying model is already fairly capable.
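
Spelling out that arithmetic:

```python
baseline_err = 1 - 0.67   # GPT-4 alone: 33% of tasks fail
reflexion_err = 1 - 0.88  # with the refining loop: 12% fail
print(f"{(baseline_err - reflexion_err) / baseline_err:.0%}")  # -> 64%
```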

25

addition t1_jdkssmg wrote

Wow! I was just thinking the other day that, now that we have very advanced statistical models of the world, the next step is some search algorithm + feedback loop. In other words, a way for the model to use its statistical understanding of the world to guide a search towards a solution while also updating itself along the way. This feels like an important step. Or at least the idea is the first step in this direction.

20

DiscussionGrouchy322 t1_jdmrq88 wrote

Wow, so many words to try and say you're applying test-driven design to prompt engineering. I will keep this as an example of how not to write technical content. (I was reading the blog post.)

Maybe this is a joke posting that was also written by ChatGPT.

When you make those charts with the weights and things... are they meant to convey information, or do you just follow a previous template where you saw information presented that way and try to match the shape?

8

Cherubin0 t1_jdlif7r wrote

Wow, so we can hook it up with `cargo check` and it will generate perfect Rust code.
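
Roughly this glue code would do it (a sketch; `llm` is a hypothetical stand-in for a model call, and whether the result is "perfect" is another matter):

```python
import subprocess

def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a model call

def repair_until_it_compiles(src_path: str, max_iters: int = 5) -> bool:
    # Assumes the current working directory is inside the crate.
    for _ in range(max_iters):
        proc = subprocess.run(
            ["cargo", "check", "--message-format=short"],
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            return True  # it compiles; "perfect" is a taller order
        with open(src_path) as f:
            code = f.read()
        # Feed the compiler diagnostics back to the model, like Reflexion
        # feeds back execution output.
        fixed = llm(
            f"Fix this Rust code so it compiles:\n{code}\n"
            f"Compiler output:\n{proc.stderr}"
        )
        with open(src_path, "w") as f:
            f.write(fixed)
    return False
```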

5

3deal t1_jdkiao9 wrote

AI is growing faster than our capacity to adapt. We are doomed.

1

Nyanraltotlapun t1_jdkkc6q wrote

There is no way for humans to adapt to alien intelligence. The idea of developing general AI has been insanely horrifying from the beginning.

9

3deal t1_jdkmcrb wrote

We all know the issue, and we keep running down this path anyway.

11

t0slink t1_jdkq5c1 wrote

Nah, full speed ahead please. With enough development, a cure for cancer, aging, and all manner of devastating human ailments could happen in this decade.

It is senseless to cut off a pathway that could literally save and improve tens of billions of lives over the next few decades because you're scared it can't be done correctly.

18

sweatierorc t1_jdkt9uq wrote

A cure for cancer and aging in this decade? AI has gotten really good, but let's not get carried away.

19

SmLnine t1_jdlgtl8 wrote

If an intelligence explosion happens, there's really no telling what's possible. Maybe these problems are trivial to a 1-million-IQ machine, maybe not. The only real question is whether the explosion will happen. Two years ago I would have said 1% in the next ten years; now I'm up to 10%. Maybe in two more years it'll look like 30%.

12

sweatierorc t1_jdlhgay wrote

IMHO, cancer and aging are necessary for complex organisms. It is more likely that we solve cloning or build the first in vitro womb than that we defeat cancer or aging.

−8

MINECRAFT_BIOLOGIST t1_jdlmzvv wrote

Well, cloning and artificial wombs are basically done or very close; we just haven't applied them to humans for ethical reasons. Six years ago there was already a very premature lamb kept alive in an artificial womb for four weeks.

As for cancer and aging...it seems increasingly clear that part of the process is just that genes necessary for development get dysregulated later on in life. I think the fact that we can rejuvenate our own cells by making sperm and eggs points to the fact that the dysregulation should be fixable, and recent advances in aging research seem to show that this is true. The issue is, of course, pushing that process too far and ending up with cells dedifferentiating or becoming cancerous, but I think it's possible if we're careful.

9

MarmonRzohr t1_jdlyfub wrote

>artificial wombs are basically done or very close

Bruh... put down the hopium pipe. There's a bit more work to be done there, especially if you mean “artificial womb” as in from conception to term, not as in a device intended for prematurely born babies.

The second one is what was demonstrated with the lamb.

−1

nonotan t1_jdln1d9 wrote

We already know of complex organisms that essentially don't age, and also others that are cancer-free or close to it. In any case, "prevent any and all aging and cancer before it happens" is a stupid goalpost. "Be able to quickly and affordably detect, identify and treat arbitrary strains of cancer and/or symptoms of aging" is essentially "just as good", and frankly seems like it could well already be within the reach of current models if they had the adequate "bioengineering I/O" infrastructure, and fast & accurate bioengineering simulations to train on.

ML could plausibly help in getting those online sooner, but unless you take the philosophical stance that "if we just made AGI they'd be able to solve every problem we have, so everything is effectively an ML problem", it doesn't seem like it'd be fair to say the bottlenecks to solving either of those are even related to ML in the first place. It's essentially all a matter of bioengineering coming up with the tools required.

8

SmLnine t1_jdlwhtu wrote

>but unless you take the philosophical stance that "if we just made AGI they'd be able to solve every problem we have, so everything is effectively an ML problem", it doesn't seem like it'd be fair to say the bottlenecks to solving either of those are even related to ML in the first place. It's essentially all a matter of bioengineering coming up with the tools required.

We're currently using our brains (a general problem solver) to build bioengineering tools that can cheaply and easily edit the DNA of a living organism. 30 years ago this would have sounded like magic. But there's no magic here. This potential tool has always existed; we just didn't understand it.

It's possible that there are other tools on the table that we simply don't understand yet. Maybe what we've been doing the last 60 years is the bioengineering equivalent of bashing rocks together. Or maybe it's close to optimal. We don't know, and we can't know until we aim an intellectual superpower at it.

3

SmLnine t1_jdlxego wrote

There are complex mammals that effectively don't get cancer, and there are less complex animals and organisms that effectively don't age. So I'm curious what your opinion is based on.

2

MarmonRzohr t1_jdmj8th wrote

>There are complex mammals that effectively don't get cancer

You got a source for that ?

That's not true at all according to everything I know, but maybe what I know is outdated.

AFAIK there are only mammals that seem to develop cancer much less than they should, namely large mammals like whales. Other than that, every animal from Cnidaria up develops tumors. E.g., even the famously immortal Hydras develop tumors over time.

That's what makes cancer so tricky. There is a good chance that far, far back in evolution there was a selection between longevity and rate of change, or something else. Therefore there may be nothing we can do to prevent cancer; we can only hope for suppression/cures when/if it happens.

Again, this may be outdated.

1

sweatierorc t1_jdm83bv wrote

Which ones? Do they not get cancer, or are they just more resistant to it?

0

SmLnine t1_jdmftzs wrote

I said "effectively" because a blanked statement would be unwarranted. There has probably been at least one naked mole rate in the history of the universe that got cancer.

https://www.cam.ac.uk/research/news/secrets-of-naked-mole-rat-cancer-resistance-unearthed

0

sweatierorc t1_jdmkacg wrote

Sure, humans under 40 are also very resistant to cancer. My point was that cancer comes with old age, and aging seems to be a way for us to die before cancer or dementia kills us. There is "weak" evidence that people who have dementia are less likely to get cancer. I understand that some mammals like whales or elephants seem to be very resistant to cancer, but if we were to double or triple their average life expectancy, other diseases might become more prevalent, maybe even cancer.

1

t0slink t1_jdkufvf wrote

> AI has gotten really good, but let’s not get carried away.

People were saying the same thing five years ago about the generative AI developments we've seen this year.

6

sweatierorc t1_jdlcwkm wrote

True, but with AI, more computing power/data means better models. With medicine, things move slower. If we get a cure for one or two cancers this decade, it would be a massive achievement.

2

Art10001 t1_jdmff0b wrote

More intelligence, plus more time (AIs operate at different time scales) = a faster rate of discoveries.

0

sweatierorc t1_jdmilbm wrote

Do we know that? E.g., with quantum computing, we know that it won't really revolutionize our lives despite the fact that it can solve a new class of problems.

3

Art10001 t1_jdmyazo wrote

Quantum computing solves new types of problems, and their resolution, or findings from them, improve our lives.

2

meregizzardavowal t1_jdksro1 wrote

I don’t know if people are as much saying we should cut off the pathway because they are scared. What I’m hearing is they think we ought to spend more effort on ensuring it’s safe, because a Pandora’s box moment may come up quickly.

13

t0slink t1_jdlhf3s wrote

I wish you were right, but people are calling for investment in AGI to cease altogether:

> There is no way for humans to adapt to alien intelligence. The idea of developing general AI has been insanely horrifying from the beginning.

One of the parent comments.

Such absolutist comments leave no room whatsoever for venturing into AGI.

1

comfytoday t1_jdljrdg wrote

I'm a little surprised at the seeming lack of any backlash, tbh. I'm sure it's coming though.

1

brucebay t1_jdlc3ix wrote

This is not an alien intelligence yet. We understand how it works and how it thinks. But eventually this version can generate an AI that is harder for us to understand, and that version can generate another AI. At some point it will become alien to us because we may not understand the math behind it.

4

WonderFactory t1_jdm1slk wrote

We don't understand how it works. We understand how it's trained but we don't really understand the result of the training and exactly how it arrives at a particular output. The trained model is an incredibly complex system.

4

SzilvasiPeter t1_jdudjj3 wrote

Well, our own body is alien to us. The brain, the gut, the endocrine system, and so on. There are emergent complexities everywhere from giant black holes to a pile of dirt. It is the same with conceptual things like math or computer science. Simple axioms and logic gates lead to beautiful complex systems.

I guess, we should get used to "not understanding" at this point.

1

Nyanraltotlapun t1_jdm0r15 wrote

>This is not an alien intelligence yet. We understand how it works and how it thinks.

It's alien not because we don't understand It, but because It is not a protein life form. It has nothing in common with humans: It does not feel hunger, does not need sex, does not feel love or pain. It is metal, plastic, and silicon. It is something completely nonhuman that can think and reason. It is the true horror, won't you see?

>We understand how it works and how it thinks

Sort of, partially. And it is false to assume in general. Long story short, the main property of complex systems is the ability to pretend and mimic. You cannot properly study something that can pretend and mimic.

0

Spud_M314 t1_jdlp71e wrote

Genetically alter the human brain to make more neocortical neurons and glia... that makes the brain more brainy: more gray matter, more smart stuff... A biological (human) superintelligence is more likely...

2

Jeffy29 t1_jdquk5v wrote

Literal doomsaying. I know, I know, “bUt ThIs TiMe iTs dIfFeRenT”. I am sure you guys will be right one day.

1

Puzzleheaded_Acadia1 t1_jdlw72w wrote

Can someone explain to me what this paper is about?

1

yaosio t1_jdomvtr wrote

I think they give GPT-4 a task, GPT-4 attempts to complete it and is told whether it worked, then GPT-4 looks at what happened and determines why it failed, and then tries again with this new knowledge. This is all done through natural language prompts; the model isn't being changed.

I saw somebody else in either this sub or /r/openai using a very similar method to get GPT-4 to write and deploy a webpage that could accept valid email addresses. Of course, I can't find it, and neither can Bing Chat, so maybe I dreamed it. I distinctly remember asking if it could do QA, and then the person asked what I meant, and I said have it check for bugs. I post a lot so I can't find it in my post history.

I remember the way it worked: they gave it the task, then GPT-4 would write out what it was going to do, what it predicted would happen, write the code, and then check whether what it did worked. If it didn't work, it would write out why, plan again, then act again. So it went plan->predict->act->check->plan. This worked: it went from nothing to a working, deployed webpage without any human intervention other than setting the task.
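
That plan->predict->act->check loop, sketched out (hypothetical prompts and helper names, reconstructed as loosely as the comment itself):

```python
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a GPT-4 call

def execute(plan: str) -> str:
    raise NotImplementedError  # placeholder: write code, deploy, etc.

def plan_predict_act_check(task: str, max_cycles: int = 10) -> str | None:
    notes = ""
    for _ in range(max_cycles):
        # Plan the next step, then predict its outcome before acting.
        plan = llm(f"Task: {task}\nNotes so far:{notes}\nWhat is your next step?")
        prediction = llm(f"Plan:\n{plan}\nWhat do you expect to happen?")
        result = execute(plan)
        # Check the actual outcome against the prediction.
        verdict = llm(
            f"Plan:\n{plan}\nExpected:\n{prediction}\nActual:\n{result}\n"
            "Did it work? Answer 'yes' or explain why not."
        )
        if verdict.lower().startswith("yes"):
            return result
        notes += f"\nTried: {plan}\nFailed because: {verdict}"
    return None
```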

2

mrfreeman93 t1_jdnebrv wrote

I think it was already well known that it would fix its own errors when provided the error message; this is not a breakthrough.

1

afreydoa t1_jdrvs8d wrote

I wonder if combining LLMs with planning would enhance the creation of poems, or that example task of creating sentences that end with a specific letter.

My thinking is that poem generation often struggles when the LLM can't find a suitable ending, as the initial part of the line or paragraph is already locked and can't be altered. However, when directing ChatGPT to rework the response by modifying the starting point, it seems to produce better outcomes more often.

1

SpaceCadetIowa t1_jdmcfga wrote

No need, the government makes up new ones to keep the people thinking we need them.

−1

RealSonZoo t1_jdkoq5c wrote

Question, maybe dumb: how are they comparing results to GPT-4, which isn't released yet and, I think, is mostly closed source?

−6

metalman123 t1_jdkqd75 wrote

GPT-4 is released......

24

RealSonZoo t1_jdkqjld wrote

Oh so if I go to the ChatGPT website and start talking with it, that's GPT-4?

−18

addition t1_jdkrd3s wrote

You need ChatGPT Plus to use GPT-4 at the moment.

15

metalman123 t1_jdkqv8i wrote

What rock have you been under?

The paid version has GPT-4 access. People have access to the GPT-4 API.

This is old information.

14

Dry_Percentage_1399 t1_jds1o8b wrote

Really? I have paid to access GPT-4, but only through the website. How can I use the GPT-4 API?

1

throwaway957280 t1_jdl2cq3 wrote

If you pay for ChatGPT Plus and manually select the new model, yes. By default, no.

11

tysam_and_co t1_jdkqv3e wrote

I would presume that it's a bolt-on external method that utilizes a pretrained model with its own inputs as a dynamically generated information sieve of sorts. Of course, the inductive prior is encoded in the Reflexion algorithm itself, so we are bringing some new information to the table here (not that GPT-4+ couldn't somehow do this itself someday, either).

2

ertgbnm t1_jdkv8rw wrote

Umm, wow! I recommend backing up this GitHub repo before it gets taken down for "safety".

−6