Submitted by koyo4ever t3_10ugxmc in deeplearning
Is it technically possible to train a model using many personal computers, like a cluster?
E.g., an algorithm that trains small parts of a model on volunteers' personal computers, like a community where members make their GPU capacity available, even if it's only a little.
The idea is to train small parts of a model across many volunteers, then bring the pieces together into one powerful model.
Could this approach beat the huge sums of money spent on models like GPT-3?
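What's being described here is essentially federated learning. Below is a minimal sketch of federated averaging (FedAvg), assuming each volunteer trains a copy of the model on a private data shard and a coordinator averages the returned weights; the toy task and all names are illustrative, not taken from any real project.

```python
# Minimal sketch of federated averaging (FedAvg): each "volunteer" trains
# locally, a coordinator averages the weights. Toy task, illustrative only.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, epochs=1, lr=0.01):
    """One volunteer: copy the global weights, train on local data only."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), targets).backward()
        opt.step()
    return model.state_dict()

def fed_avg(client_states):
    """Coordinator: element-wise average of all volunteers' weights."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        for state in client_states[1:]:
            avg[key] += state[key]
        avg[key] /= len(client_states)
    return avg

# Toy run: 5 volunteers, 3 communication rounds, private regression shards.
global_model = nn.Linear(10, 1)
for _round in range(3):
    states = []
    for _client in range(5):
        x = torch.randn(32, 10)           # this client's private data
        y = x.sum(dim=1, keepdim=True)    # toy regression target
        states.append(local_update(global_model, x, y))
    global_model.load_state_dict(fed_avg(states))
```

The catch is communication: for real models, shipping weights or gradients between volunteers dominates everything else, which is what the reply below gets at.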
Appropriate_Ant_4629 t1_j7clc8s wrote
The LAION project (https://laion.ai/) is probably the closest thing to this.
They're now looking for volunteers to help build their fully F/OSS ChatGPT successor; a video describing the help they need can be found here.
They have a great track record on similar-scale projects: they've partnered with /r/datahoarders and other volunteers to create training sets, including the 5.8-billion image/text-pair dataset (LAION-5B) they used to train a better version of CLIP.
Their actual model training tends to be done on some of the larger European supercomputers, though. If I recall correctly, their CLIP derivative was trained with compute time donated on JUWELS. It's too hard to split such jobs into average-laptop-sized tasks.
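To make "too hard to split" concrete, here is a rough back-of-the-envelope for naive data parallelism, where every worker must exchange a full gradient copy each optimizer step; the model size and home-upload bandwidth below are assumptions for illustration.

```python
# Why home connections choke on gradient synchronization.
# All numbers are illustrative assumptions, not measurements.
params = 1_000_000_000          # a modest 1B-parameter model
grad_bytes = params * 2         # fp16 gradients: 2 bytes each -> ~2 GB/step
upload_mbps = 20                # assumed typical home upload bandwidth
seconds_per_sync = grad_bytes * 8 / (upload_mbps * 1e6)
print(f"~{seconds_per_sync:.0f} s just to upload one step's gradients")
# -> ~800 s per optimizer step before any compute happens, versus well
#    under a second on a datacenter interconnect running at hundreds of Gb/s.
```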