
BeautifulLazy5257 t1_jdsr09g wrote

I was wondering if you knew the trick to ReAct without langchain.

For instance, memory is just passing the past conversation back through the prompt as context. There's nothing programmatic about it. You don't need the langchain library; you just have to craft the right prompt.
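A minimal sketch of that idea, with no langchain involved: keep a list of past turns and splice them into the prompt as plain text before each new question. The template and role labels here are my own assumptions, not any library's API.

```python
def build_prompt(history, user_message, system="You are a helpful assistant."):
    """Concatenate prior turns into a single prompt string ("memory")."""
    lines = [system]
    for role, text in history:          # history: list of (role, text) tuples
        lines.append(f"{role}: {text}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")          # cue the model to answer
    return "\n".join(lines)

history = [
    ("User", "My name is Sam."),
    ("Assistant", "Nice to meet you, Sam!"),
]
prompt = build_prompt(history, "What is my name?")
```

The resulting string goes straight to the completion endpoint; the model "remembers" Sam only because the earlier turn is sitting in the prompt.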

I think that using langchain kind of obscures how the model is actually achieving the desired outputs.

Having models interact with PDFs ultimately is just turning a PDF into a string and passing that string as context, with a prompt added to help prime the model.
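Here's one way that could look from scratch. I'm assuming pypdf for the extraction step (one common choice, not the only one), and the prompt template is just an illustration:

```python
def pdf_to_text(path):
    """Flatten a PDF into a single string (assumes pypdf: pip install pypdf)."""
    from pypdf import PdfReader
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def build_pdf_prompt(document_text, question):
    """Prime the model to answer only from the extracted text."""
    return (
        "Answer the question using only the document below.\n\n"
        f"Document:\n{document_text}\n\n"
        f"Question: {question}\nAnswer:"
    )

# e.g. prompt = build_pdf_prompt(pdf_to_text("report.pdf"), "When are invoices due?")
prompt = build_pdf_prompt("Invoices are due in 30 days.", "When are invoices due?")
```

For long PDFs you'd also have to chunk the text to fit the context window, but the principle is the same: it's all string manipulation plus a prompt.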

I'll look into CoT and read through the ReAct source code, but I'm going to avoid langchain for most things, and even avoid the ReAct documentation, since those docs only tell me how to use the libraries, not how to achieve the effect from scratch.
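For what it's worth, the ReAct effect itself can be reproduced with nothing but a prompt, a regex, and a loop: ask the model for Thought/Action lines, run the requested tool yourself, append the result as an Observation, and call the model again. A bare-bones sketch, with the model stubbed out so the loop is visible (the format strings and tool names are my own choices):

```python
import re

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # demo only; eval is unsafe

def fake_model(prompt):
    """Stand-in for a real LLM call, scripted for one calculator question."""
    transcript = prompt.split("Question:", 1)[1]
    if "Observation:" not in transcript:
        return "Thought: I should compute this.\nAction: calculator[17 * 3]"
    return "Thought: I have the result.\nFinal Answer: 51"

def react(question, model=fake_model, max_steps=5):
    prompt = (
        "Answer the question. Use the format:\n"
        "Thought / Action: tool[input], and finish with 'Final Answer: ...'\n\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        out = model(prompt)
        prompt += out + "\n"
        if "Final Answer:" in out:
            return out.split("Final Answer:")[1].strip()
        m = re.search(r"Action: (\w+)\[(.+?)\]", out)
        if m:                                   # run the tool, feed result back
            tool, arg = m.groups()
            prompt += f"Observation: {TOOLS[tool](arg)}\n"
    return None

answer = react("What is 17 * 3?")
```

Swap `fake_model` for a real completion call and add more entries to `TOOLS`, and that's essentially the agent loop langchain wraps.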

Edit:

This is a pretty clear overview of CoT. Very compelling as well.

https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html?m=1

I guess I'll start A/B testing some prompts to break down problems and tool selections.
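As a starting point for that A/B test, here are two variants I'd compare: a plain ask versus a version that asks the model to break the problem into steps and pick tools first. "Let's think step by step" is the zero-shot CoT phrasing; the rest of the wording and the tool list are assumptions to tweak:

```python
def make_variants(question, tools=("calculator", "search")):
    """Return a plain prompt and a CoT/tool-selection prompt for A/B testing."""
    tool_list = ", ".join(tools)
    plain = f"Question: {question}\nAnswer:"
    cot = (
        f"Question: {question}\n"
        f"First break the problem into steps, then decide which tool "
        f"({tool_list}) each step needs, then answer.\n"
        "Let's think step by step."
    )
    return {"plain": plain, "cot": cot}

variants = make_variants("What is 15% of 80?")
```

Send both variants for the same set of questions and score the answers; the delta tells you how much the breakdown prompt is actually buying you.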

If you have any more input on particular prompts you've used, I'd be grateful.

Edit 2: https://www.youtube.com/watch?v=XV1RXLPIVlw&ab_channel=code_your_own_AI It can't get clearer than this. Great video.
