Submitted by trafalgar28 t3_106ahcr in MachineLearning
I have been working on a project with the GPT-3 API for almost a month now. The main drawback of GPT-3 is that the prompt and completion together are capped at roughly 4,000 tokens, where a token is roughly equivalent to ¾ of a word. Due to this, providing a large context to GPT-3 is quite difficult.
Is there any way to resolve this issue?
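One common workaround (a sketch, not from this thread) is to split the large context into chunks that each fit the token budget and send them in separate calls. The `chunk_text` helper below is hypothetical; it uses the rough "1 token ≈ ¾ of a word" estimate from the post, so real token counts will differ (OpenAI's `tiktoken` library gives exact counts for a given model):

```python
# Sketch: greedily pack whole words into chunks that fit a token budget.
# Assumes ~0.75 words per token, per the rough estimate above; for exact
# counts, tokenize with the model's actual encoding (e.g. via tiktoken).

def estimate_tokens(text: str) -> int:
    """Rough token estimate: 1 token ~ 0.75 words."""
    return int(len(text.split()) / 0.75)

def chunk_text(text: str, max_tokens: int = 3000) -> list:
    """Split text into word-boundary chunks under max_tokens each."""
    words = text.split()
    max_words = int(max_tokens * 0.75)  # budget converted to words
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

doc = "word " * 10000          # stand-in for a long document
chunks = chunk_text(doc, max_tokens=3000)
print(len(chunks))             # number of API calls needed
```

Each chunk can then be summarized or queried independently, and the per-chunk answers combined in a final prompt (a map-reduce style approach).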
Advanced-Hedgehog-95 t1_j3fkzu3 wrote
There is a GPT-3 subreddit. You should probably post it there too.