_Arsenie_Boca_ t1_j3kzllo wrote
Your laptop will not begin to suffice, not for inference and especially not for fine-tuning. You would need something like an A100 GPU in a server that handles requests, and even then the results will be much worse than GPT-3's. If you don't already have an AI infrastructure, go with an API; it will save you more than a bit of money (unless you are certain you will use it at scale long-term). If you are worried about depending on OpenAI, there are other companies that serve LMs.
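To give a sense of how thin the API route is, here's a minimal sketch of calling a hosted completions endpoint over plain HTTP. The endpoint path and model name follow OpenAI's public completions API; treat them as assumptions and swap in whatever your provider documents:

```python
import json
import os
import urllib.request

# OpenAI-style completions endpoint (assumption: adjust for your provider).
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, model="text-davinci-003", max_tokens=100):
    """Build the JSON payload for a completion call."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def complete(prompt, api_key=None):
    """Send a completion request. Needs a real API key to make a live call."""
    api_key = api_key or os.environ.get("OPENAI_API_KEY")
    if api_key is None:
        raise RuntimeError("set OPENAI_API_KEY to make live calls")
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

The point is that your application code reduces to "send prompt, read text back", which is why an API is so much cheaper to start with than running your own GPU server.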
learningmoreandmore OP t1_j3l12bf wrote
I appreciate the insight. I didn't know that it would be that expensive!
So you're saying that even if OpenAI somehow magically closed shop, I could just jump ship to another API, and I'd probably only need to slightly modify the code accessing it, since they should be able to handle the same prompts?
LetterRip t1_j3l42en wrote
You can use GPT-J-6B in 8-bit, and do fine-tuning on a single GPU with 11 GB of VRAM.
https://huggingface.co/hivemind/gpt-j-6B-8bit
You could probably do a fine-tune and test fairly cheaply using Google Colaboratory or Colaboratory Pro ($9.99/month).
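Rough back-of-envelope on why 8-bit makes the 11 GB figure plausible (weights only; activations, gradients, and optimizer state add overhead on top, and ~6B is an approximation of GPT-J's parameter count):

```python
# VRAM estimate for GPT-J-6B weights at different precisions.
# Weights only -- activations and optimizer state are not counted.
PARAMS = 6_000_000_000  # ≈6B parameters (approximate)

def weight_gib(params, bytes_per_param):
    """Weight memory in GiB for a given precision."""
    return params * bytes_per_param / 2**30

fp32 = weight_gib(PARAMS, 4)  # ≈22 GiB -- nowhere near fitting on an 11 GB card
fp16 = weight_gib(PARAMS, 2)  # ≈11 GiB -- borderline, no room for anything else
int8 = weight_gib(PARAMS, 1)  # ≈6 GiB  -- leaves headroom on an 11 GB card
print(f"fp32: {fp32:.1f} GiB, fp16: {fp16:.1f} GiB, int8: {int8:.1f} GiB")
```

Even at fp16 the weights alone roughly saturate 11 GB, which is why the 8-bit quantized checkpoint is the one that makes single-GPU fine-tuning feasible.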
learningmoreandmore OP t1_j3l4cu0 wrote
Thanks! Would this be able to scale and handle more computation, or is it only for personal use? I wonder why most people aren't using this version if it's so efficient computation-wise.
_Arsenie_Boca_ t1_j3l1x1y wrote
Pretty much, yes. I believe other APIs might use slightly worse models than OpenAI's, but definitely better than GPT-J.
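If all the backends accept a plain text prompt, the vendor-specific bits can be isolated behind a small adapter, so switching really is a one-line config change. A sketch (the second provider's name, endpoint, and model are hypothetical placeholders):

```python
from dataclasses import dataclass

@dataclass
class Provider:
    """Minimal description of a hosted-LM backend."""
    name: str
    endpoint: str
    model: str

OPENAI = Provider("openai", "https://api.openai.com/v1/completions",
                  "text-davinci-003")
# Hypothetical alternative vendor -- endpoint and model are placeholders.
OTHER = Provider("other-vendor", "https://api.example.com/v1/generate",
                 "their-best-model")

def build_payload(provider, prompt, max_tokens=100):
    """Same prompt, same request shape, regardless of vendor."""
    return {"model": provider.model, "prompt": prompt,
            "max_tokens": max_tokens}
```

The code that constructs prompts never changes; only the `Provider` object you pass in does.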
Nmanga90 t1_j3xxi44 wrote
OpenAI is not going to close shop any time soon. Not sure if you know this, but Microsoft has been making huge investments in them and has licensing rights to the GPT models. So Microsoft is pretty much the one serving the APIs, and they are currently looking at making another $10 billion investment in OpenAI.