
dojoteef t1_j0275on wrote

While there is a field of research investigating federated learning, which might one day enable an ML@Home-style project, the current algorithms require too much memory, computation, and bandwidth to train very large models like GPT-3.

I'm hopeful that an improved approach will be devised that mitigates these issues (in fact, I have some ideas I'm considering for my next research project), but for now they render a real ML@Home-style project infeasible.
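To make the bandwidth problem concrete, here's a minimal sketch of federated averaging (FedAvg), the standard baseline algorithm in this area, assuming PyTorch. The linear model, the four clients, and their y = 2x data are all toy placeholders, not anyone's actual setup:

```python
# Minimal FedAvg sketch: clients train locally on private data, then a
# server averages their weights. The model, clients, and data below are
# toy placeholders for illustration only.
import copy
import torch
import torch.nn as nn

def local_update(model, data, targets, lr=0.01, epochs=1):
    """Train a client's private copy of the model on its local data."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), targets)
        loss.backward()
        opt.step()
    return model.state_dict()

def fed_avg(state_dicts):
    """Average client weights parameter-by-parameter."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        for sd in state_dicts[1:]:
            avg[key] += sd[key]
        avg[key] /= len(state_dicts)
    return avg

# Toy setup: 4 "home" clients, each holding private samples of y = 2x.
global_model = nn.Linear(1, 1)
clients = [(x, 2 * x) for x in (torch.randn(32, 1) for _ in range(4))]

for round_num in range(10):
    # Every round, each client downloads the full model and uploads full
    # weights back -- O(model size) traffic per client per round, which
    # is exactly what breaks down at GPT-3 scale (~350 GB in fp16).
    updates = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(updates))
```

Notice that the communication cost per round scales with the full parameter count, independent of how little data each client holds; that's the core obstacle for volunteer-compute training of very large models.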

1

genuinelySurprised OP t1_j02iugf wrote

I figured there was some technical catch related to scaling. It's a pity there's no way (yet) to put together a truly open competitor to GPT-3 and whatever comes after it.

1