Submitted by granddaddy t3_zjf45w in MachineLearning
rafgro t1_izyke4t wrote
I've been sliding the context window, summarizing chunks, chaining summaries, summarizing chained summaries, all while guiding attention (focus on X, ignore Y). I've also had limited success with storing all summaries separately, choosing the most relevant summary based on the task/question, and then answering with the relevant context window opened in addition to the summaries, but it was too much pain (also financially) for a very small gain in my case (though I imagine in a legal environment it may be much more important to get every detail right).
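A minimal sketch of the chunk-summarize-chain idea described above. The helper names, prompt wording, chunk size, and model choice are my own assumptions, not rafgro's exact setup, and it uses the pre-v1 `openai` Completions API:

```python
import openai  # assumes the pre-v1 openai package and an API key are configured

def complete(prompt: str, max_tokens: int = 256) -> str:
    # Single completion call; model and settings are arbitrary choices.
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0,
    )
    return resp["choices"][0]["text"].strip()

def summarize_document(text: str, focus: str, ignore: str, chunk_chars: int = 8000) -> str:
    # 1. Slide a window over the document in fixed-size chunks.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

    # 2. Summarize each chunk while guiding attention ("focus on X, ignore Y").
    summaries = []
    for chunk in chunks:
        prompt = (
            f"Summarize the following text. Focus on {focus}. Ignore {ignore}.\n\n"
            f"{chunk}\n\nSummary:"
        )
        summaries.append(complete(prompt))

    # 3. Chain the chunk summaries and summarize the chained summaries once more.
    chained = "\n".join(summaries)
    return complete(f"Combine these partial summaries into one summary:\n\n{chained}\n\nSummary:")
```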
granddaddy OP t1_izytv9p wrote
I found this twitter thread that may hold the answer (or at least one way to do it)
- the data that needs to be fed into the model is divided into chunks
- when a user asks a question, each of these chunks (likely less than 4k tokens) is reviewed for relevance
- any relevant section of a chunk is combined with the user's question
- this combined text is fed in as the prompt, and GPT-3 answers the user's question (see the sketch below)
overall, it sounds similar to what you have done, but I wonder how much the computational load changes
there's a prebuilt openai notebook you can use to replicate it
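One possible implementation of that chunk-review flow (my own sketch, not the notebook mentioned above). Embedding similarity is one common way to do the "review each chunk" step; the `embed` and `answer` helpers are made up for illustration, again using the pre-v1 `openai` package:

```python
import numpy as np
import openai

def embed(texts):
    # Embed a list of strings with an OpenAI embedding model (model choice is an assumption).
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

def answer(question: str, chunks: list[str], top_k: int = 3) -> str:
    # Review every chunk: cosine similarity between the question and each chunk.
    q_vec = embed([question])[0]
    c_vecs = embed(chunks)
    scores = c_vecs @ q_vec / (np.linalg.norm(c_vecs, axis=1) * np.linalg.norm(q_vec))

    # Keep the most relevant sections and combine them with the user's question.
    best = [chunks[i] for i in np.argsort(scores)[::-1][:top_k]]
    prompt = (
        "Answer the question using only the context below.\n\n"
        + "\n---\n".join(best)
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=300, temperature=0
    )
    return resp["choices"][0]["text"].strip()
```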
rafgro t1_j00tamz wrote
I've found another, much cheaper approach: tokenize the long text and the task (client-side, without costly API calls), find the highest density of task-token matches in the long text, slide the context window there while retaining a general summary of the document, and answer from this prompt.
granddaddy OP t1_j051ykd wrote
I'm having a hard time wrapping my head around this. Do you think you could elaborate further? Do you have a github repo by chance?
rafgro t1_j05enj4 wrote
Example tokenizer: https://github.com/josephrocca/gpt-2-3-tokenizer. In the most vanilla version, you could count occurrences of tokens from the question/task in the document and jump to that place; e.g., if the task is about lung cancer, jump to the book chapter with the most occurrences of "lung" and "cancer". It works well enough, but you can make it more robust by building a simple scoring system (e.g., a higher weight assigned to "lung" than to "cancer"), by finding words related to the task words with a word2vec-style model and searching for them with appropriate weights as well, or even by splicing a few different high-scoring places into one prompt.
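A bare-bones version of that keyword-density jump, using plain word splitting instead of the GPT tokenizer linked above, and with an invented rarity-based weighting scheme. No API calls are needed to pick the window:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "and", "about"}

def best_window(document: str, task: str, window_words: int = 1500, step: int = 250) -> str:
    words = re.findall(r"\w+", document.lower())
    task_words = [w for w in re.findall(r"\w+", task.lower()) if w not in STOPWORDS]

    # Simple scoring: rarer task words in this document get more weight.
    doc_counts = Counter(words)
    weights = {w: 1.0 / (1 + doc_counts[w]) for w in task_words}

    # Scan candidate windows and keep the one with the densest task-token matches.
    best_score, best_start = -1.0, 0
    for start in range(0, max(1, len(words) - window_words), step):
        window = words[start:start + window_words]
        score = sum(weights.get(w, 0.0) for w in window)
        if score > best_score:
            best_score, best_start = score, start

    # Slide the context window to the densest region; a running summary of the
    # whole document would be prepended to this text before prompting the model.
    return " ".join(words[best_start:best_start + window_words])
```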
granddaddy OP t1_j05pe9j wrote
Very helpful. Appreciate the link. Is that your repo?
rafgro t1_j05v581 wrote
No, I think it's a fork of AI Dungeon's encoder.