ghostfaceschiller

ghostfaceschiller t1_je84v6j wrote

You are talking about two different things here.

  1. Reflexion/ReAct uses another system (like LangChain) to allow the bot to genuinely loop back over its previous results and try to improve them. This does end up getting you better results or outcomes in the end.

  2. You can also simply tell the bot, in your prompt, something like "before you respond, review your first draft for errors, and only output your second draft". Now, this is not what the bot will actually do, but regardless, it will often result in higher-quality output, presumably because in the training data that kind of phrase is typically associated with a certain type of answer (i.e., better answers).
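The second trick is just prompt wording, so it's easy to sketch. A minimal example below, assuming the message format of the OpenAI chat API; the wrapper function name and the exact instruction text are illustrative, not any official recipe:

```python
# Sketch of the "review your draft" prompt trick described above.
# The messages list follows the OpenAI chat-completions shape
# ({"role": ..., "content": ...}); swap in whatever client you use.

def build_self_review_prompt(question: str) -> list[dict]:
    """Wrap a question in an instruction to draft, self-review,
    and return only the revised answer."""
    return [
        {
            "role": "system",
            "content": (
                "Before you respond, write a first draft, review it "
                "for errors, and output only your corrected second draft."
            ),
        },
        {"role": "user", "content": question},
    ]

messages = build_self_review_prompt("Explain big-O notation briefly.")
```

The model doesn't actually do two passes here; the instruction just steers it toward the part of the distribution where answers look "revised".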

5

ghostfaceschiller t1_jdz6vzn wrote

I think this was shown a while ago (like a week ago, which just feels like ten years)

While I do think this is important for several reasons, personally I don't see it as all that impactful for what I expect AI to be capable of going forward.

That's because pretty much all my assumptions for the next couple of years are based on the idea of systems that can loop and reflect on their own actions, re-edit code based on error messages, etc. Which they are very good at.
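The loop-and-reflect pattern above can be sketched in a few lines. Everything here is illustrative: `ask_model` is a hypothetical stand-in for a real LLM call, and the toy "model" below just fixes one planted bug:

```python
import traceback

def run_with_repair(code: str, ask_model, max_attempts: int = 3) -> str:
    """Execute generated code; on failure, feed the traceback back to
    the model and retry with its revision."""
    for _ in range(max_attempts):
        try:
            exec(code, {})
            return code  # ran cleanly
        except Exception:
            # Hand the model its own error message to act on.
            code = ask_model(code, traceback.format_exc())
    raise RuntimeError("could not repair code within attempt budget")

# Toy stand-in for the LLM: repairs the one bug we planted.
def fake_model(code, error):
    return code.replace("1 / 0", "1 / 1")

fixed = run_with_repair("x = 1 / 0", fake_model)
```

The real systems are fancier (they keep a scratchpad of past attempts, etc.), but the core loop is this: run, capture the error, hand it back, retry.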

76

ghostfaceschiller t1_jdsnev1 wrote

Reply to comment by blose1 in [D] GPT4 and coding problems by enryu42

This line of thinking sounds sillier and sillier every week. It's like talking to someone who has had their eyes shut and fingers in their ears for the last two months.

EDIT: and to be clear, I'm not trying to argue that it isn't statistics-based/trained on the internet/etc. I'm saying it turns out that kind of system is more powerful and capable than we ever would have intuitively thought it would be.

−1

ghostfaceschiller t1_jds202e wrote

Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42

Yeah, it's essentially that at an automated level. Tbh it is powerful enough, based on results so far, that I would actually be really surprised if it did not yield very significant gains on these tests.

I'm sure there will be a paper out doing it in like the next few days, so we'll see

15

ghostfaceschiller t1_jdenoo2 wrote

Reply to comment by BigDoooer in [N] ChatGPT plugins by Singularian2501

Here's a standalone product which is a chatbot with a memory. But look at LangChain for several ways to implement the same thing.

The basic idea is: periodically feed your conversation history to the embeddings API and save the embeddings to a local vectorstore, which serves as the "long-term memory". Then, any time you send a message or question to the bot, first send that message to the embeddings API (super cheap and fast), run a local similarity comparison, and prepend any relevant contextual info ("memories") to your prompt as it gets sent to the bot.
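A minimal sketch of that flow, with one loudly-labeled assumption: the `embed` function below is a bag-of-words stand-in for the real embeddings API call, just so the retrieval logic is runnable end to end:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in for the embeddings API: a crude bag-of-words vector.
    In practice you'd call the embeddings endpoint here instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memory = []  # the local "vectorstore": (embedding, text) pairs

def remember(text: str):
    """Periodically called on conversation history."""
    memory.append((embed(text), text))

def recall(query: str, k: int = 2) -> list[str]:
    """Embed the new message, compare locally, return top-k memories
    to prepend to the prompt."""
    q = embed(query)
    scored = sorted(memory, key=lambda m: cosine(q, m[0]), reverse=True)
    return [text for _, text in scored[:k]]

remember("The user likes the OCaml language")
remember("The user lives in Lisbon")
# Retrieves the OCaml memory, not the Lisbon one:
memories = recall("which language does the user like", k=1)
```

With real embeddings you'd swap `Counter` for a float vector and the vectorstore for something like FAISS or Chroma, but the retrieve-then-prepend shape is the same.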

14

ghostfaceschiller t1_jdekpke wrote

Reply to comment by psdwizzard in [N] ChatGPT plugins by Singularian2501

Trivially easy to build using the embeddings API; there are already a bunch of 3rd-party tools that give you this. I'd be surprised if it doesn't exist as one of the default tools within a week of the initial rollout.

EDIT: OK yeah, it does already exist as part of the initial rollout - https://github.com/openai/chatgpt-retrieval-plugin#memory-feature

14

ghostfaceschiller t1_jax0gd8 wrote

17

ghostfaceschiller t1_j6ej1pw wrote

I recently put together a repo of character frequency analyses, because they can be really useful when designing keyboard layouts. So I have an eye out right now for interesting ways to look at and visualize the data. I think this particular instance is probably too limited to be useful for keyboard layouts, but if you do anything more, please let me know! It's one of the more interesting visualizations I've seen, so I'd love to include/link it.

https://github.com/dschil138/word-and-character-frequencies

2