Submitted by enryu42 t3_122ppu0 in MachineLearning
ghostfaceschiller t1_jds855j wrote
Reply to comment by BeautifulLazy5257 in [D] GPT4 and coding problems by enryu42
I would just look up ReAct, CoT (chain of thought), and LangChain Agents. It's pretty simple to implement
BeautifulLazy5257 t1_jdsr09g wrote
I was wondering if you knew the trick to ReAct without langchain.
For instance, memory is just passing the past conversation through the prompt as context. There's nothing programmatic about it. You don't need the langchain library, you just have to craft the right prompt
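That "memory as context" trick can be sketched in a few lines. This is a minimal sketch, not any library's implementation; `call_llm` is a hypothetical stand-in for whatever completion API you use:

```python
# "Memory" is nothing but prompt construction: each turn, the whole
# conversation so far is prepended to the new user message.
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"(model reply, given {len(prompt)} chars of context)"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The entire history becomes the prompt, so the model "remembers".
    prompt = "\n".join(history) + "\nAssistant:"
    reply = call_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply
```

Every turn makes the prompt longer, which is why real systems eventually truncate or summarize the history to stay within the context window.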
I think that using langchain kind of obscures how the model is actually achieving the desired outputs.
Having models interact with PDFs is ultimately just turning a PDF into a string and passing the string as context, while adding a prompt to help prime the model.
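The PDF pattern is the same idea. A minimal sketch of the prompt-priming part (in practice you'd extract the text with a library such as pypdf; here a plain string stands in for the extracted text):

```python
# "Chat with a PDF" reduces to: extracted text + priming prompt + question.
def build_pdf_prompt(document_text: str, question: str) -> str:
    return (
        "You are answering questions using only the document below.\n"
        "--- DOCUMENT ---\n"
        f"{document_text}\n"
        "--- END DOCUMENT ---\n"
        f"Question: {question}\n"
        "Answer:"
    )
```

Long documents won't fit in the context window, which is where chunking and retrieval come in, but the core mechanism is still just string concatenation.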
I'll look into CoT and look through the ReAct source code, but I'm going to avoid using langchain for most things, or even looking at the ReAct documentation, since those docs will only tell me how to use the libraries, not how to achieve the effect from scratch.
Edit:
This is a pretty clear overview of CoT. Very compelling as well.
https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html?m=1
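Chain-of-thought prompting, as described in the overview linked above, is just few-shot prompting where the exemplar shows its reasoning. A minimal sketch (the exemplar is the standard tennis-ball example; nothing here is from a library):

```python
# A one-shot CoT exemplar: because the example answer reasons step by
# step, the model is nudged to do the same before giving its answer.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def cot_prompt(question: str) -> str:
    return COT_EXEMPLAR + f"Q: {question}\nA:"
```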
I guess I'll start A/B testing some prompts to break down problems and tool selections.
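Tool selection without langchain boils down to a ReAct-style loop: prompt the model to emit `Action: tool[input]` lines, parse them, run the tool, and feed the observation back. A hedged from-scratch sketch (tool names and prompt wording are my own assumptions, not from the ReAct paper or any library):

```python
import re
from typing import Optional

# Assumed toolbox; the eval is restricted to plain arithmetic expressions.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

REACT_PREAMBLE = (
    "Answer the question. You may use a tool by writing:\n"
    "Action: <tool>[<input>]\n"
    "Available tools: calculator\n"
    "When you are done, write: Final Answer: <answer>\n\n"
)

def react_step(model_output: str) -> Optional[str]:
    """Parse one Action line and return its Observation.

    Returns None when the model emitted no action (e.g. a Final Answer),
    which is the loop's stopping condition.
    """
    match = re.search(r"Action: (\w+)\[(.+?)\]", model_output)
    if match is None:
        return None
    tool, arg = match.groups()
    return TOOLS[tool](arg)
```

The full loop would append `Observation: {react_step(...)}` to the prompt and call the model again until it produces a Final Answer.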
If you have any more input on particular prompts you've used, I'd be grateful.
Edit 2: https://www.youtube.com/watch?v=XV1RXLPIVlw&ab_channel=code_your_own_AI It can't get clearer than this. Great video.