Ok, I saw that. I'm not entirely sure what you think prompting is, but it's not about getting exact answers or anything like that. As I understand it (however limited that understanding is), it's about bringing the model's attention to the part of its latent space closest to where your solution may fall.
It's a prompt engineering library that has implementations of various papers in the space, including ReAct, PAL, etc. We are working on adding more. Here's a list of some of the papers we are implementing: https://42papers.com/c/llm-prompting-6343
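For context on what one of those papers looks like in practice, here's a minimal sketch of the ReAct pattern's Thought → Action → Observation loop in plain JavaScript. All names here are illustrative assumptions, not the Minds API, and the model call is stubbed with canned responses so the example runs without any LLM service.

```javascript
// Toy tool registry. A real setup would expose search, code execution, etc.
const tools = {
  calculator: (expr) => {
    // Toy tool: handles "a * b" only, to keep the sketch dependency-free.
    const [a, b] = expr.split("*").map(Number);
    return String(a * b);
  },
};

// Stub standing in for an LLM call. A real model would generate the
// Thought/Action lines from the prompt; here they are scripted.
function fakeModel(prompt) {
  if (!prompt.includes("Observation:")) {
    return "Thought: I need to compute 6 * 7.\nAction: calculator[6 * 7]";
  }
  return "Final Answer: 42";
}

// ReAct loop: ask the model, run any requested tool, feed the observation
// back into the prompt, and stop when a final answer appears.
function react(question, maxSteps = 5) {
  let prompt = `Question: ${question}\n`;
  for (let i = 0; i < maxSteps; i++) {
    const output = fakeModel(prompt);
    const final = output.match(/Final Answer:\s*(.*)/);
    if (final) return final[1];
    const action = output.match(/Action:\s*(\w+)\[(.*)\]/);
    if (action) {
      const [, tool, arg] = action;
      const observation = tools[tool](arg);
      prompt += `${output}\nObservation: ${observation}\n`;
    }
  }
  return null;
}

console.log(react("What is 6 * 7?")); // → "42"
```

The key design point is that tool outputs go back into the prompt as "Observation:" lines, so the model's next step is grounded in real results instead of its own guesses.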
gsvclass OP t1_j9xttgi wrote
Reply to comment by cthorrez in [P] Minds - A JS library to build LLM powered backends and workflows (OpenAI & Cohere) by gsvclass
While it may seem that way, correct answers are always expected but never delivered: everything works within a margin of error. With humans that margin is pretty large and not easy to fix. Also, "correct" is subjective. LLMs are language models: they use the knowledge embedded in their weights, combined with the context provided by the prompt, to do their best. The positive thing here is that the margin of error is actively being reduced with LLMs, which wasn't the case with however we did this before.