granddaddy OP t1_izyu345 wrote
Reply to comment by Acceptable-Cress-374 in [D] Getting around GPT-3's 4k token limit? by granddaddy
this sounds like the right answer (and something i need to keep in mind as well)
just as an FYI, this is one answer i found in a Twitter thread:
- the data that needs to be fed into the model is split into chunks
- when a user asks a question, each chunk (each comfortably under the 4k-token limit) is scored for relevance to the question
- the most relevant sections are pulled out and combined with the user's question
- that combined text is fed in as the prompt, and GPT-3 answers the question from just that context (rough sketch below)
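here's a minimal sketch of that flow, assuming the pre-1.0 `openai` Python client and embedding similarity for the "relevance" step; the chunk size, model names, function names, and prompt template are my own illustrative choices, not anything from the thread:

```python
# Minimal chunk-and-retrieve sketch (assumes openai<1.0 client).
import numpy as np
import openai

openai.api_key = "sk-..."  # your API key

def chunk_text(text, max_words=300):
    """Split a document into roughly fixed-size word chunks
    (word count is a crude proxy for token count)."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def embed(texts):
    """Embed a list of strings with the ada-002 embedding model."""
    resp = openai.Embedding.create(
        model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

def answer(question, document, top_k=3):
    chunks = chunk_text(document)
    chunk_vecs = embed(chunks)
    q_vec = embed([question])[0]
    # cosine similarity between the question and every chunk
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec))
    best = np.argsort(sims)[::-1][:top_k]  # indices of top-k chunks
    context = "\n\n".join(chunks[i] for i in best)
    prompt = ("Answer the question using only the context below.\n\n"
              f"Context:\n{context}\n\n"
              f"Question: {question}\nAnswer:")
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=300)
    return resp["choices"][0]["text"].strip()
```

the nice part of this design is that the prompt only ever contains the top-k chunks, so it stays under the 4k-token limit no matter how large the source document gets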
there's a prebuilt OpenAI notebook you can use to replicate it