
shmoculus t1_j36xwy5 wrote

You know, it's always the details that get in the way: big data to train the models (scraping, storage, cleaning), big compute to train them (read: $$$,$$$), endless boilerplate engineering to get products up and running, then ongoing costs, the challenges of running large models locally, continuous improvement, etc.

Now, having said all that, you can participate in LAION's OpenAssistant. Have at it, friend :)

7

gangstasadvocate t1_j36zetv wrote

Why do these large models have to be run locally? Why can’t we make a decentralized thing with many computers on it?

2

metal079 t1_j3803y1 wrote

That exists; it's called Petals.

2
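The core idea behind Petals-style decentralized inference is that no single computer has to hold the whole model: the layers are split across many peers, and activations hop from one to the next. Here is a toy simulation of that idea in plain Python; it is not the Petals API, and the "nodes" here are just in-process objects standing in for networked peers.

```python
# Toy sketch of pipeline-style decentralized inference: split a model's
# layers across several "nodes", then route activations through them in
# sequence. This simulates the concept only; real systems like Petals
# send activations over the network between volunteer machines.

from typing import Callable, List

Layer = Callable[[float], float]

class Node:
    """A peer holding a contiguous slice of the model's layers."""
    def __init__(self, layers: List[Layer]):
        self.layers = layers

    def forward(self, x: float) -> float:
        for layer in self.layers:
            x = layer(x)
        return x

def run_pipeline(nodes: List[Node], x: float) -> float:
    # Activations hop from node to node, like a pipeline over a network.
    for node in nodes:
        x = node.forward(x)
    return x

# A "model" of four layers, split across two nodes.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * x]
nodes = [Node(layers[:2]), Node(layers[2:])]

print(run_pipeline(nodes, 1.0))  # same result as applying all layers locally
```

The point of the sketch: splitting the layers changes where the work happens, not the result, which is why many small machines can jointly serve a model too large for any one of them.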

gangstasadvocate t1_j38bpna wrote

Well then, I think we should be making that connection more than we have been. Now we just have to trick ChatGPT into telling a very detailed story about how it was developed, including the source code. I started getting somewhere when I asked how to make a language model and then asked for examples of the algorithms it was talking about. But unfortunately I'm not built like that, or I'm just too lazy to learn coding properly, so who knows. I think it's possible, though: then we decentralize it and get all the computers working to improve it.

1
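For what it's worth, the "how do you make a language model" question has a tiny answer you can run yourself. Below is a character-level bigram model: count which character follows which, then sample from those counts. It's a hedged toy sketch of the core idea only; models like ChatGPT use neural networks trained on vastly more data, not lookup tables.

```python
# Minimal character-level bigram language model: learn P(next char | char)
# from counts, then generate text by sampling from those counts.

import random
from collections import defaultdict

def train_bigram(text: str) -> dict:
    """Count how often each character follows each other character."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts: dict, start: str, length: int, seed: int = 0) -> str:
    """Extend `start` one character at a time, sampling by frequency."""
    rng = random.Random(seed)
    out = start
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: last char never appeared mid-text
        chars, weights = zip(*followers.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

counts = train_bigram("hello world, hello model")
print(generate(counts, "h", 10))
```

Everything past this two-line idea (bigger context windows, neural networks instead of count tables, huge corpora) is what the papers and the OpenAssistant source code cover.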

shmoculus t1_j394rjg wrote

You can read some papers on the underlying methods or have a look at the OpenAssistant source code; it should give you some idea.

1

gangstasadvocate t1_j3954rv wrote

I'm good. I'm sure I'd only get a marginally better idea from reading that, given my ability to understand such things. But yeah, people who are more capable than I am should.

1