nixed9 t1_jdrxr76 wrote

A Reflexion loop asks the model to react to its own output and critique it before giving you a revised answer.

Edit: (The paper describes a loop like this that feeds back into itself to aid its own cognition. It can repeat this loop multiple times.)

You can do a mini-loop by prompting. I've been playing with this all day.

I prompt it like this:

> "For this interaction, we are going to use the following structure.

> User (me): [I will ask a topic or question]

> You will provide an Assistant Hypothetical Response: [Brief or simplified answer to the topic or question]

> Then you will undergo Agent Reflection: [You will provide a Critique of the hypothetical response, highlighting the limitations, inaccuracies, or areas that need improvement or expansion, while providing guidance on how to address these issues in the revised response]

> Then you will provide an Actual Response: [The natural and contextually appropriate answer to the topic or question, as generated by the advanced language model, which incorporates the suggestions and improvements from the agent reflection for a more comprehensive and accurate response. This also can include step-by-step reasoning.]

> Do you understand?"
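
If you'd rather drive it programmatically, here's a minimal sketch of the same hypothesize → reflect → revise cycle against the OpenAI chat API (assuming the pre-1.0 `openai` Python package; the model name, round count, and exact critique prompt are placeholder assumptions, not anything from the paper):

```python
import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment

def chat(messages):
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return resp["choices"][0]["message"]["content"]

def reflexion_mini_loop(question, rounds=2):
    # Step 1: hypothetical response
    messages = [{"role": "user", "content": question}]
    answer = chat(messages)
    for _ in range(rounds):
        # Step 2: agent reflection; Step 3: revised actual response
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": (
            "Critique the response above, highlighting limitations, "
            "inaccuracies, or areas that need improvement or expansion, "
            "then provide a revised response that incorporates the fixes.")})
        answer = chat(messages)
    return answer
```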

125

Hamoodzstyle t1_jdsbhec wrote

What is the point of the "Do you understand?" at the end? Does the model confirming that it understands add some sort of emphasis or something?

34

CobaltAlchemist t1_jdsqr5e wrote

(Not OP) I've found that asking it directly whether it understands helps to bridge any gaps I've missed. In the past it has asked me clarifying questions afterward that I hadn't thought about.

Alternatively, when I just assume it understands, it sometimes comes up with some really wild stuff because I wasn't clear.

80

Nowado t1_jdtr40r wrote

I do the same thing I'd do with a human: ask it to repeat and rephrase the instructions. After that I can be sure it understood, and it has multiple phrasings of the instructions available, so it gets less hung up on any exact wording.

12

nixed9 t1_jdsegt9 wrote

No explicit purpose, other than to have it respond with “yes, I am ready.”

53

DirtyKinkyInLove t1_jdwlgmp wrote

It also reduces token usage. If the chatbot gives a wordy response, it takes up more space in the context window, and the chatbot will forget its instructions sooner. If that sounds like gibberish, let me know and I'll break it down.

3

farmingvillein t1_jdsmdt2 wrote

1. This isn't really an accurate summary of the Reflexion paper. As noted in the other post:

> Eh, I must've misunderstood the paper. It sounded like they were asking GPT-4 to create unit tests, execute the code, and then update its answer based on the results of those unit tests.

This version is correct.

2. However, if I do the above and throw in a semi-random Beginner problem that failed in OP's original pass-through, it successfully builds the answer.

u/enryu42 -- if you care to take things forward, I'd try implementing Reflexion, either with the underlying codebase (https://github.com/noahshinn024/reflexion-human-eval/) or with manual prompt work.

Or, since you presumably already did this, if you can provide a link to the problems in copy-pastable text form (manually coercing the math notation is a little painful), it would greatly accelerate others hopping onto the analysis.

The fact that I immediately saw improvement on a randomly selected (Beginner) problem suggests that there is plenty of room for improvement here.

27

enryu42 OP t1_jdsokwz wrote

Interesting! Here are the scraped and auto-converted statements (the formatting is sometimes off, especially in the sample tests, but it's understandable). Prefixes are "abc" for beginner, "arc" for regular, and "agc" for grand.

I do believe that the "Beginner" ones can be improved, but it'll be interesting to see what happens on "Grand" (or even "Regular"), as they require coming up with some ideas before writing the code.

7

farmingvillein t1_jdspflq wrote

So, I don't know whether this actually makes a difference, but I'd review the overall post-conversion text.

E.g.: https://github.com/enryu43/llm_coding/blob/main/atcoder_eval/statements/statement_abc293_b.txt

You'll see that it represents "K" and "N" wrongly here (in sample 1: 15 versus 5, and 12 versus 2).

Certainly, as a human, I would find this confusing. You might get some automated robustness by telling it how you converted the text (it might then adjust its "expectations" when interpreting the numbers). Obviously, though, the fairer comparison would just be to fix this.

> as they require coming up with some ideas before writing the code.

The other thing I'd note--

Not sure whether you're using the API directly, but if I play around with these in ChatGPT, I often run into the context window limit and have to nurse it along to complete the text. I'd make sure that however you're running things, you're giving it enough "space" to iterate (particularly if you use any reflection techniques).
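
If you're hitting that on the API side, one approach is a generous max_tokens plus automatic continuation whenever the reply gets cut off. A rough sketch (assuming the pre-1.0 `openai` Python package; the model name, token limit, and "continue" prompt are placeholders):

```python
import openai  # assumes openai<1.0; `messages` is your conversation so far

def complete_with_room(messages, max_tokens=4096):
    parts = []
    while True:
        resp = openai.ChatCompletion.create(
            model="gpt-4", messages=messages, max_tokens=max_tokens)
        chunk = resp["choices"][0]["message"]["content"]
        parts.append(chunk)
        if resp["choices"][0]["finish_reason"] != "length":
            break  # model finished naturally; no truncation
        # Reply was cut off: feed it back and ask the model to continue
        messages = messages + [
            {"role": "assistant", "content": chunk},
            {"role": "user", "content": "Continue exactly where you left off."},
        ]
    return "".join(parts)
```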

6

nixed9 t1_jdt1xyp wrote

OK, my bad, but that's how I've been using Reflexion-style prompting.

1

muskoxnotverydirty t1_jds07qr wrote

Eh, I must've misunderstood the paper. It sounded like they were asking GPT-4 to create unit tests, execute the code, and then update its answer based on the results of those unit tests.
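
Something like this, roughly (a sketch of the idea only, not the paper's actual implementation; the prompts, the `chat` helper, and the use of `exec` are all placeholder assumptions):

```python
def self_debug(task, chat, max_iters=3):
    # `chat` is any callable that sends a prompt to GPT-4 and returns text
    code = chat(f"Write a Python solution for this problem:\n{task}")
    tests = chat(f"Write assert-based unit tests for this problem:\n{task}")
    for _ in range(max_iters):
        try:
            exec(code + "\n" + tests, {})  # run the solution plus its tests
            return code                    # every assert passed
        except Exception as err:
            # Feed the failure back and ask for a reflective fix
            code = chat(
                f"This code:\n{code}\n\nfailed its tests with: {err!r}\n"
                "Reflect on why, then return a corrected solution.")
    return code
```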

10

farmingvillein t1_jdsm0hw wrote

No, you didn't misunderstand it--your understanding is correct. OP is giving an answer that is similar to part of the Reflexion paper, but not the entirety.

15

yaosio t1_jdtenqi wrote

What's it called if you have it self-reflect on non-code text it's written? For example, have it write a story, then tell it to critique and fix problems in the story. Can the methods from the paper also be used for non-code tasks? It would be interesting to see how much its writing quality can improve using applicable methods.

3

AllAmericanBreakfast t1_jdtynpv wrote

I tried this out, and it only had partial success.

First, just dumping in this prompt and then asking a question resulted in the AI coming up with a laughably simple, failed first response, followed by a critique and improvement. It's as if it recognized that the easiest way to "demonstrate improvement" would be to set the bar low by failing utterly on the first attempt.

Then I tried breaking it up into stages: asking for a response, getting a response, asking for a critique, getting a critique, asking for an improvement, and getting an improvement.

This worked better.

However, when I then asked for a second critique and improvement (again in separate stages), it instead started inventing fake problems to solve. I was asking it to implement a case-insensitive longest-common-substring function that returns the version of the LCS found in the longer of the two strings.

The second-pass critique was that the original (working) code didn't handle the possibility that "the longer string may not contain the LCS," which is impossible given the way it was originally implemented. It then added some extra code to deal with this "problem."
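
(For reference, a plain implementation of that task looks something like the sketch below; this is my own illustrative version, not the code the model produced. Note that the returned match is sliced from the longer string by construction, which is why the second-pass critique was inventing a non-problem.)

```python
def longest_common_substring(a: str, b: str) -> str:
    # Make sure the result is sliced from the longer input string
    longer, shorter = (a, b) if len(a) >= len(b) else (b, a)
    lo, sh = longer.lower(), shorter.lower()  # case-insensitive comparison
    best_len = best_end = 0
    prev = [0] * (len(sh) + 1)  # common-suffix lengths for the previous row
    for i in range(1, len(lo) + 1):
        curr = [0] * (len(sh) + 1)
        for j in range(1, len(sh) + 1):
            if lo[i - 1] == sh[j - 1]:
                curr[j] = prev[j - 1] + 1  # extend the common suffix
                if curr[j] > best_len:
                    best_len, best_end = curr[j], i
        prev = curr
    # Return the match with its original casing from the longer string
    return longer[best_end - best_len:best_end]
```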

3

xjE4644Eyc t1_jdtefph wrote

Thank you for this; what a novel way to approach the problem. I'm going to start using this regularly.

1