Submitted by gsvclass t3_11ak97p in MachineLearning
gsvclass OP t1_j9wq69s wrote
Reply to comment by ZestyData in [P] Minds - A JS library to build LLM powered backends and workflows (OpenAI & Cohere) by gsvclass
It's a prompt engineering library that has implementations of various papers in the space, including ReAct, PAL, etc. We are working on adding more. Here's a list of some of the papers we are implementing: https://42papers.com/c/llm-prompting-6343
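For readers unfamiliar with ReAct, the core idea is a loop that interleaves model-generated "Thought/Action" steps with tool "Observation" steps until the model emits a final answer. Here is a minimal sketch of that loop; the `llm` function is a stub standing in for a real model call, and the prompt format and tool names are illustrative assumptions, not Minds' actual API:

```javascript
// Tools the agent may call during reasoning (illustrative).
const tools = {
  calculate: (expr) => String(Function(`"use strict"; return (${expr})`)()),
};

// Stub model: a real implementation would send `prompt` to an LLM
// (e.g. OpenAI or Cohere) and return its next Thought/Action step.
// Here two steps are scripted for illustration.
function llm(prompt) {
  if (!prompt.includes("Observation:")) {
    return "Thought: I need to compute 12 * 7.\nAction: calculate[12 * 7]";
  }
  return "Thought: I have the result.\nFinal Answer: 84";
}

// ReAct loop: alternate model "Action" steps with tool "Observation"
// steps until the model emits a final answer.
function react(question, maxSteps = 5) {
  let prompt = `Question: ${question}\n`;
  for (let i = 0; i < maxSteps; i++) {
    const step = llm(prompt);
    prompt += step + "\n";
    const done = step.match(/Final Answer:\s*(.*)/);
    if (done) return done[1].trim();
    const action = step.match(/Action:\s*(\w+)\[(.*)\]/);
    if (action) {
      const [, tool, arg] = action;
      prompt += `Observation: ${tools[tool](arg)}\n`;
    }
  }
  throw new Error("No final answer within step limit");
}
```

With a real model behind `llm`, the growing prompt transcript is what lets the model condition each new step on prior tool observations.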
cthorrez t1_j9xgmx1 wrote
may be an unpopular opinion these days but I don't think prompt engineering is a suitable topic for /r/MachineLearning
gsvclass OP t1_j9xqgf1 wrote
Why do you feel that way?
cthorrez t1_j9xrguh wrote
That comment is very over the top sarcasm. You would have realized that if you had checked the source I linked.
gsvclass OP t1_j9xrqzq wrote
I updated my comment. Not sure what you mean by "You would have realized that if you had checked the source I linked" — what source?
cthorrez t1_j9xrv1v wrote
The source I linked in the comment you linked and then deleted.
gsvclass OP t1_j9xsgz3 wrote
Ok, I saw that. Not entirely sure what you think prompting is, but it's not about getting exact answers or anything like that. As I understand it (however limited), it is about bringing attention to the part of the model's latent space closest to where your solution may fall.
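A concrete instance of that idea is few-shot prompting: the examples prepended to the prompt steer the model toward the region of behavior you want. A minimal sketch, assuming a hypothetical `buildPrompt` helper (not part of any library):

```javascript
// Few-shot prompt construction: worked examples are prepended to the
// question so the model's completion is conditioned on their pattern.
function buildPrompt(examples, question) {
  const shots = examples
    .map(({ q, a }) => `Q: ${q}\nA: ${a}`)
    .join("\n\n");
  return `${shots}\n\nQ: ${question}\nA:`;
}

const prompt = buildPrompt(
  [
    { q: "Capital of France?", a: "Paris" },
    { q: "Capital of Japan?", a: "Tokyo" },
  ],
  "Capital of Italy?"
);
```

The resulting string ends at `A:`, so the model's next tokens are pulled toward the short-factual-answer pattern the examples establish.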
cthorrez t1_j9xstlw wrote
People are rushing to deploy LLMs in search, summarization, virtual assistants, question answering and countless other applications where correct answers are expected.
The reason they want to get to the latent space close to the answer is because they want the LLM to output the correct answer.
gsvclass OP t1_j9xttgi wrote
While it may seem that way, correct answers are always expected but never delivered; everything works within a margin of error. With humans that margin is pretty large and not easy to fix. Also, "correct" is subjective. LLMs are language models that use the knowledge embedded in their weights, combined with the context provided by the prompt, to do their best. The positive thing here is that the margin of error is actively being reduced with LLMs, which wasn't the case with however we did this before.
cthorrez t1_j9xrl8a wrote
I think it's not suitable because it isn't really related to the process of a machine learning anything. It seems to me to belong to the field of human computer interaction.