Submitted by Scarlet_pot2 t3_104svh6 in singularity
shmoculus t1_j36xwy5 wrote
You know, it's always the details that get in the way: big data to train the models (scraping, storage, cleaning), big compute to train the models (read: $$$,$$$), endless boilerplate engineering work to get products up and running, then ongoing costs, the challenges of running large models locally, continuous improvement, etc.
Now having said all that, you can participate in LAION's OpenAssistant, have at it friend :)
gangstasadvocate t1_j36zetv wrote
Why do these large models have to be run locally? Why can’t we make a decentralized thing with many computers on it?
metal079 t1_j3803y1 wrote
That exists, it's called Petals
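The core idea behind Petals is pipeline parallelism: each volunteer machine hosts only a slice of the model's layers, and activations are handed from node to node, so no single machine needs to hold the whole model. Here is a toy, self-contained sketch of that idea in plain Python (this is an illustration of the concept, not Petals' actual API; the `Node` and `run_pipeline` names are invented for this example):

```python
# Toy sketch of pipeline-parallel inference (NOT Petals' real API):
# each "node" hosts a contiguous slice of layers; the activation is
# forwarded node to node, like transformer blocks spread over a swarm.

from dataclasses import dataclass
from typing import Callable, List

Layer = Callable[[float], float]

@dataclass
class Node:
    """One volunteer machine hosting a slice of the model's layers."""
    layers: List[Layer]

    def forward(self, activation: float) -> float:
        for layer in self.layers:
            activation = layer(activation)
        return activation

def run_pipeline(nodes: List[Node], x: float) -> float:
    """Pass the activation through each node in turn."""
    for node in nodes:
        x = node.forward(x)
    return x

# A stand-in "model" of 6 layers, split across 3 nodes of 2 layers each.
layers = [lambda v, i=i: v * 2 + i for i in range(6)]
nodes = [Node(layers[i:i + 2]) for i in range(0, 6, 2)]

# Running the full model locally gives the same result as the pipeline.
full = 1.0
for layer in layers:
    full = layer(full)

assert run_pipeline(nodes, 1.0) == full
```

The real system adds the hard parts this sketch skips: discovering peers, routing around nodes that drop out, and moving tensors over the network.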
gangstasadvocate t1_j38bpna wrote
Well then I think we should be making that connection more than we've been doing. Now we just have to trick ChatGPT into telling a very detailed story about how it was developed, including the source code. I started getting somewhere with it when I was asking how you make a language model, and then asking it to show me examples of whatever algorithms it was talking about. But unfortunately I'm not built like that, or I'm just too lazy to learn coding properly, so who knows. But I think it's possible; then we decentralize it and get all the computers working to improve it.
shmoculus t1_j394rjg wrote
You can read some papers on the underlying methods or have a look at the OpenAssistant source code, it should give you some idea
gangstasadvocate t1_j3954rv wrote
I'm good; I'm sure I'd only get a marginally better idea from reading that, given my ability to understand such things. But yeah, people who are more capable than I am should.