Submitted by besabestin t3_10lp3g4 in MachineLearning
FallUpJV t1_j5ya6t5 wrote
Reply to comment by manubfr in Few questions about scalability of chatGPT [D] by besabestin
This is something I read often, that other LLMs are undertrained, but how come OpenAI's model is the only one that isn't? Is it the datasets? Computing power?
MysteryInc152 t1_j60vz8p wrote
OpenAI's models are still undertrained as well.
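"Undertrained" here usually refers to the Chinchilla scaling result (Hoffmann et al., 2022), which suggests roughly 20 training tokens per model parameter for compute-optimal training. A minimal sketch of that rule of thumb, using widely reported public estimates for GPT-3 (the figures are assumptions for illustration, not from this thread):

```python
def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Rough compute-optimal training-token budget under the Chinchilla
    ~20-tokens-per-parameter heuristic (an approximation, not an exact law)."""
    return n_params * tokens_per_param

# GPT-3: 175B parameters, reportedly trained on ~300B tokens (public estimates).
gpt3_params = 175e9
gpt3_tokens_trained = 300e9

optimal = chinchilla_optimal_tokens(gpt3_params)
print(f"Chinchilla-optimal tokens for 175B params: {optimal:.2e}")   # ~3.5e12
print(f"Reported GPT-3 training tokens:            {gpt3_tokens_trained:.2e}")
# By this heuristic, GPT-3 saw only about a tenth of its compute-optimal
# data budget, i.e. it too was undertrained.
```

By the same yardstick, most large models of that era (GPT-3, Gopher, etc.) were trained on far less data than compute-optimal, which is the point the reply is making.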