Submitted by Balance- t3_124eyso in MachineLearning
Thorusss t1_je1z0ib wrote
Reply to comment by jrkirby in [N] OpenAI may have benchmarked GPT-4’s coding ability on its own training data by Balance-
>Then they spent 20K$+ compute on training.
Your estimate is a few orders of magnitude too low.
AuspiciousApple t1_je2aij3 wrote
Idk, thousands of GPUs going brrrr for months, how much can it cost?
$10?
jrkirby t1_je2f63r wrote
Whether it's 2 million dollars or 20 million, it's far more than 20 thousand. And that makes the main thesis more salient: the more money you've spent on training, the less willing you'll be to retrain the entire model from scratch just to run some benchmarks the "proper" way.
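The order-of-magnitude disagreement above comes down to simple arithmetic. A minimal back-of-envelope sketch, where the cluster size, training duration, and per-GPU-hour rate are all assumptions for illustration (none are reported figures):

```python
# Rough training-cost estimate. All inputs are hypothetical assumptions,
# not actual OpenAI numbers.
n_gpus = 10_000          # assumed cluster size ("thousands of GPUs")
hours = 90 * 24          # assumed ~3 months of continuous training
usd_per_gpu_hour = 2.0   # assumed cloud rate per GPU-hour

cost = n_gpus * hours * usd_per_gpu_hour
print(f"${cost:,.0f}")   # → $43,200,000
```

Even with conservative inputs, the result lands in the tens of millions, a few orders of magnitude above the $20K figure quoted upthread.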