BeautifulLazy5257 t1_jdsr09g wrote
Reply to comment by ghostfaceschiller in [D] GPT4 and coding problems by enryu42
I was wondering if you knew the trick to ReAct without langchain.
For instance, memory is just passing the past conversation through the prompt as context. There's nothing programmatic about it. You don't need the langchain library; you just have to craft the right prompt.
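To make that concrete, here's a minimal sketch of memory-as-context. Everything here (function names, prompt format) is illustrative, not from any library:

```python
# Minimal sketch of "memory" as prompt context: keep a running
# transcript and prepend it to each new user message.
# All names and the prompt format are illustrative.

def build_prompt(history, user_message):
    """Join past turns into one string and append the new message."""
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    return f"{transcript}\nUser: {user_message}\nAssistant:"

history = [
    ("User", "My name is Sam."),
    ("Assistant", "Nice to meet you, Sam!"),
]
prompt = build_prompt(history, "What is my name?")
# The model can now "remember" Sam's name only because the past
# turns are literally in the prompt string it receives.
```

That's the whole trick: the model is stateless, and "memory" is just string concatenation before each call.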
I think that using langchain kind of obscures how the model is actually achieving the desired outputs.
Having models interact with PDFs ultimately just means turning the PDF into a string and passing that string as context, while adding a prompt to help prime the model.
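A sketch of that flow. The extraction step assumes the pypdf package; the priming prompt wording is just an example:

```python
# Sketch: turn a PDF into a string, then wrap it in a priming prompt.
# Extraction assumes `pip install pypdf`; the prompt wording is made up.

def pdf_to_text(path):
    from pypdf import PdfReader  # extracts text page by page
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def make_pdf_prompt(document_text, question):
    """Prime the model with the document, then ask the question."""
    return (
        "You are answering questions about the document below.\n\n"
        f"---\n{document_text}\n---\n\n"
        f"Question: {question}\nAnswer:"
    )

# Usage: prompt = make_pdf_prompt(pdf_to_text("report.pdf"), "...")
prompt = make_pdf_prompt("Invoice total: $120", "What is the total?")
```

No retrieval magic here; for long PDFs you'd additionally have to chunk the text to fit the context window, but the principle is the same.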
I'll look into CoT and read through the ReAct source code, but I'm going to avoid using langchain for most things, and even avoid the ReAct documentation, since those docs will only tell me how to use the libraries, not how to achieve the effect from scratch.
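For what it's worth, the core of a ReAct-style agent really is just a prompt-and-parse loop. Here's a from-scratch sketch with a stubbed model and a toy tool, so all the names and the Thought/Action/Observation format shown are illustrative:

```python
import re

# From-scratch sketch of a ReAct-style loop: the model emits
# Thought/Action lines, we run the named tool, append an Observation,
# and call the model again, until it emits a Final Answer.
# fake_llm is a stub so the loop is runnable; swap in a real LLM call.

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # toy tool (eval is unsafe outside a demo)

def fake_llm(prompt):
    # Stub standing in for a real model call.
    if "Observation:" not in prompt:
        return "Thought: I should compute this.\nAction: calculator[2+3]"
    return "Final Answer: 5"

def react(question, llm=fake_llm, max_steps=5):
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        output = llm(prompt)
        if "Final Answer:" in output:
            return output.split("Final Answer:")[1].strip()
        match = re.search(r"Action: (\w+)\[(.+)\]", output)
        if match:
            tool, arg = match.groups()
            observation = TOOLS[tool](arg)
            prompt += f"{output}\nObservation: {observation}\n"
    return None
```

The only other piece a real implementation needs is a system prompt up front listing the tools and the expected Thought/Action/Observation format, which is exactly the "prompt engineering that directs the model to choose between a few tool descriptions" part.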
Edit:
This is a pretty clear overview of CoT. Very compelling as well.
https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html?m=1
I guess I'll start A/B testing some prompts to break down problems and tool selection.
If you have any more input on particular prompts you've used, I'd be grateful.
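A sketch of what that A/B test might look like: one direct variant against a chain-of-thought variant. The exact wording is illustrative, though the trigger phrase "Let's think step by step" is the one from the zero-shot CoT work:

```python
# Sketch of A/B prompt variants for eliciting chain-of-thought.
# Wording is illustrative; "Let's think step by step" is the
# zero-shot CoT trigger phrase from the literature.

QUESTION = "If I have 3 boxes with 4 apples each, how many apples?"

variant_a = f"Q: {QUESTION}\nA:"  # direct answer

variant_b = (  # chain-of-thought variant
    f"Q: {QUESTION}\n"
    "A: Let's think step by step."
)

def run_ab(llm, variants):
    """Send each variant to the model and collect the outputs."""
    return {name: llm(p) for name, p in variants.items()}

# Usage: results = run_ab(call_model, {"direct": variant_a, "cot": variant_b})
```

Then score the outputs on a handful of held-out questions and keep whichever variant wins.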
Edit 2: https://www.youtube.com/watch?v=XV1RXLPIVlw&ab_channel=code_your_own_AI It doesn't get clearer than this. Great video.
BeautifulLazy5257 t1_jds5lt4 wrote
Reply to comment by ghostfaceschiller in [D] GPT4 and coding problems by enryu42
How does ReAct work? Is it just a type of prompt engineering that directs the model to choose between a few tool descriptions? Is it a type of sentiment analysis that does the choosing?
How can I recreate ReAct-iveness from scratch? What does the workflow look like?
BeautifulLazy5257 t1_jdsyfbc wrote
Reply to [D] Build a ChatGPT from zero by manuelfraile
I'd start by going through the Hugging Face courseware.
You'll learn in the first chapter of their course that it's generally better to fine-tune pre-trained models. That's what they're there for.
Training an LLM from scratch costs a lot and produces a lot of wasted energy and heat.