
beingsubmitted t1_jecici1 wrote

The algorithm is barely IP; the data is the bigger part of its success.

ChatGPT is a transformer fine-tuned with reinforcement learning. The ideas and architecture it's built on aren't proprietary. The specific hyperparameters are, but that's not actually that important: the size and number of layers, for example. Most people in AI can make reasonable guesses (probably GELU or ReLU activations, probably an Adam-style optimizer, etc.). Then there are different knobs you can twiddle, and with some trial and error you dial it in. A sketch of those knobs follows.
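To make the "knobs" concrete, here's a toy sketch in PyTorch. None of these numbers are ChatGPT's actual (unpublished) values; the layer count, width, head count, and optimizer settings are illustrative guesses, and PyTorch's generic transformer layer stands in for a GPT-style decoder block:

```python
import torch
import torch.nn as nn

# Illustrative guesses, not ChatGPT's real (unpublished) configuration.
config = {
    "n_layers": 12,   # depth: how many transformer blocks to stack
    "d_model": 768,   # width: embedding / hidden size
    "n_heads": 12,    # attention heads per block
}

# PyTorch's generic transformer layer, standing in for a GPT-style
# decoder block. "gelu" is the activation the GPT papers use.
block = nn.TransformerEncoderLayer(
    d_model=config["d_model"],
    nhead=config["n_heads"],
    activation="gelu",
    batch_first=True,
)
model = nn.TransformerEncoder(block, num_layers=config["n_layers"])

# The optimizer is another knob; Adam(W) is the near-universal choice.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
```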

The size and quality of your training data matter way more, and in the case of ChatGPT, so does your compute budget. Lots of people can design a system that big (it's about as easy as writing down big numbers), but training it takes an enormous amount of compute, which costs real money. That's why, if it's so easy, not just anyone has already done it.
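For a sense of scale, here's a back-of-envelope calculation using GPT-3's published numbers (175B parameters, ~300B training tokens) and the standard approximation that training costs about 6 × parameters × tokens FLOPs. The GPU throughput and hourly price are ballpark assumptions, not measured figures:

```python
# Why "just anyone" can't train one of these.
params = 175e9   # GPT-3: 175 billion parameters (published)
tokens = 300e9   # GPT-3: ~300 billion training tokens (published)
flops = 6 * params * tokens  # ~3.15e23 FLOPs total

# Assume one A100 GPU sustaining ~150 TFLOP/s in mixed precision
# (an optimistic utilization guess, not a measured figure).
gpu_flops_per_sec = 150e12
gpu_seconds = flops / gpu_flops_per_sec
gpu_years = gpu_seconds / (3600 * 24 * 365)
print(f"~{gpu_years:.0f} GPU-years on a single A100")  # roughly 67

# At a ballpark ~$2/hour for cloud A100 time:
cost = gpu_seconds / 3600 * 2
print(f"~${cost / 1e6:.1f}M in raw compute")  # ~$1.2M, and real runs
# cost more: failed experiments, retries, and lower utilization.
```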

It should also be said that GPT was a bit of a surprise success. At this scale, training was a big gamble: you spend millions training a model, and you don't know how good it will be until it's done.

Most advancements in AI are open source and public. That openness helps the whole field advance, but progress is also about taking a risk, seeing how it pans out, and then taking the next one.

Also, there's transfer learning: if you spend a hundred million dollars training a model, I can start from your trained model and spend a tiny fraction of that to make my own. A minimal sketch of that follows.
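Here's what that looks like using the Hugging Face transformers library, with the public "gpt2" checkpoint standing in for "somebody else's expensively pretrained model". Freezing everything except the last block means fine-tuning only touches a sliver of the parameters:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Someone else's training budget, baked into downloadable weights.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Freeze the pretrained body; leave only the last transformer block
# and final layer norm trainable.
for param in model.parameters():
    param.requires_grad = False
for param in model.transformer.h[-1].parameters():
    param.requires_grad = True
for param in model.transformer.ln_f.parameters():
    param.requires_grad = True

# From here, train on your own (much smaller) dataset as usual.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"fine-tuning {trainable / total:.1%} of {total / 1e6:.0f}M params")
```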

It's like taking painstaking measurements to figure out an exact kilogram and crafting a 1 kg reference weight. You didn't invent the kilogram, however difficult the weight was to make, and if I use yours to calibrate my own, I'm not infringing on your IP.
