Submitted by granddaddy t3_zjf45w in MachineLearning
Is there a way to get around GPT-3's 4k token limit?
Companies like Spellbook appear to have found a solution, and people on Twitter have speculated about how they did it - e.g., summarizing the original document, or looping over it in 4k-token chunks until the right answer is produced (see the sketch below).
I suspect they've combined several of these approaches.
I'd be curious if you have any ideas!
Relevant Tweet: https://twitter.com/AlphaMinus2/status/1600319547348639744
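To make the chunking speculation concrete, here's a minimal sketch of the map-reduce-style approach people describe (not necessarily what Spellbook actually does): split the document into pieces that fit the context window, summarize each piece, then summarize the summaries recursively. It assumes the openai Python package with text-davinci-003, and it approximates token counts by character length just to keep the example short.

```python
# Minimal sketch of chunked (map-reduce style) summarization - illustrative only.
# Assumes the openai package; token counts are approximated by characters (~4 chars/token).
import openai

openai.api_key = "sk-..."  # your API key

CHUNK_CHARS = 12_000  # roughly 3k tokens, leaving room for the prompt and the completion

def summarize(text: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Summarize the following text, keeping every key fact:\n\n{text}\n\nSummary:",
        max_tokens=500,
        temperature=0,
    )
    return resp["choices"][0]["text"].strip()

def summarize_long_document(document: str) -> str:
    if len(document) <= CHUNK_CHARS:
        return summarize(document)
    # Map: summarize each chunk independently.
    chunks = [document[i:i + CHUNK_CHARS] for i in range(0, len(document), CHUNK_CHARS)]
    partial = [summarize(c) for c in chunks]
    # Reduce: the joined summaries may still be too long, so recurse.
    return summarize_long_document("\n".join(partial))
```

In practice you'd split with a real tokenizer and on sentence or section boundaries rather than raw character counts, but the overall structure is the same.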
visarga t1_izvv9xi wrote
I'd be interested in knowing, too. I want to parse the HTML of a page and identify what actions are possible, such as modifying text in an input or clicking a button. But web pages often run over 30K tokens, so there's no way to fit them - HTML can be extremely verbose.
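One way to cut that down, sketched below with beautifulsoup4 (the tag/attribute whitelist is purely illustrative), is to keep only the interactive elements and the attributes needed to identify them, and drop scripts, styles, and layout markup before sending anything to the model.

```python
# Sketch: flatten a page to just its actionable elements to save tokens.
# Assumes beautifulsoup4; KEEP_TAGS / KEEP_ATTRS are an illustrative whitelist.
from bs4 import BeautifulSoup

KEEP_TAGS = ["a", "button", "input", "select", "textarea", "form", "label"]
KEEP_ATTRS = {"id", "name", "type", "value", "placeholder", "href", "aria-label"}

def compact_page(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    lines = []
    for el in soup.find_all(KEEP_TAGS):
        attrs = " ".join(
            f'{k}="{v}"' for k, v in el.attrs.items() if k in KEEP_ATTRS
        )
        text = el.get_text(" ", strip=True)[:80]  # truncate long labels/link text
        lines.append(f"<{el.name} {attrs}>{text}</{el.name}>")
    return "\n".join(lines)
```

The compacted listing is usually a small fraction of the original page, and it still gives the model enough to name the element it wants to act on.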