smokingthatosamapack t1_jed6l4a wrote

The primary concern is training it to improve it. Sure, fine-tuning can be done, but building a substantial AI with significant changes and a unique model is its own feat that needs a lot of funding.

4

Scarlet_pot2 OP t1_jed7tts wrote

Fine-tuning isn't the problem. If you look at the Alpaca paper, they fine-tuned the LLaMA 7B model on instruction data generated by GPT-3 (text-davinci-003) and got results comparable to GPT-3 for only a few hundred dollars. The real cost is the base training of the model, which can be very expensive. Having enough compute to run it afterward is an issue too.
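To make the cheapness concrete: Alpaca's fine-tuning works by formatting tens of thousands of GPT-3-generated instruction/response pairs into a fixed prompt template and training on the resulting strings. A minimal sketch of that formatting step (the template text follows the format published in the Stanford Alpaca repo; the helper function name is my own):

```python
# Sketch of the Alpaca-style prompt template that turns an
# instruction/response pair into one fine-tuning example.
# The template wording follows the Stanford Alpaca repo's format;
# format_example() itself is an illustrative helper, not their code.

def format_example(instruction: str, response: str, context: str = "") -> str:
    """Format one (instruction, response) pair as a single training string."""
    if context:
        prompt = (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    else:
        prompt = (
            "Below is an instruction that describes a task. Write a response "
            "that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            "### Response:\n"
        )
    return prompt + response

example = format_example("Name three primary colors.", "Red, blue, and yellow.")
print(example)
```

The expensive part (the base LLaMA pretraining) happens long before this step; the fine-tune itself is just supervised training over strings like these.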

Both problems could be helped if there were a free online system where anyone could donate compute and anyone was allowed to use it.
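The shape of such a system would be roughly BOINC-style volunteer computing: a coordinator splits the job into independent tasks, donated machines pull tasks, compute, and push results back. A toy, stdlib-only sketch of that work-queue idea (purely illustrative; a real system would need networking, authentication, and redundancy checks against bad or malicious results):

```python
# Toy sketch of a volunteer-compute work queue, in the spirit of
# BOINC-style projects: a coordinator hands out independent tasks,
# "volunteer" workers process them, and results flow back.
# All names here are illustrative, not a real project's API.
import queue
import threading

tasks: queue.Queue = queue.Queue()
results: queue.Queue = queue.Queue()

def volunteer() -> None:
    """Simulate one donated machine: pull tasks, compute, push results."""
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return
        results.put((task, task * task))  # stand-in for a real workload
        tasks.task_done()

# Coordinator enqueues 20 independent work units.
for n in range(20):
    tasks.put(n)

# Four "volunteers" drain the queue concurrently.
workers = [threading.Thread(target=volunteer) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Coordinator collects whatever came back.
collected = {}
while not results.empty():
    task, result = results.get()
    collected[task] = result

print(len(collected))  # 20
```

This only works well when tasks are independent, which is why inference and data generation fit volunteer compute far better than base-model training, where every step needs tightly synchronized gradients.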

1

smokingthatosamapack t1_jedmdli wrote

Yeah, I see what you mean, and it could happen. But there's no such thing as a free lunch, and even if such a system existed, it would probably pale in comparison to paid solutions for compute.

1