Submitted by balthierwings t3_122q3h7 in MachineLearning
ThirdMover t1_jdrzd7f wrote
Reply to comment by rya794 in [P] Using ChatGPT plugins with LLaMA by balthierwings
That depends on how well they will be able to keep their moat. There is a lot of hunger for running LLMs on your own - if not on your own hardware, then at least in software environments you control. People want to see what makes them tick rather than trust "Open"AI's black boxes.
Yeah, they have a performance lead, but time will tell how well they can stay ahead of the rest of the field trying to catch up.
rya794 t1_jds0xqs wrote
I don’t think so, I suspect my argument holds no matter who is running the most advanced LLM. The market leader will never have an incentive to open source their “app store”.
The only way this breaks down is if by some miracle, an open source model takes and maintains the lead.
ThirdMover t1_jds1kid wrote
The lead may not always be obvious, and the trade-off for transparency may be worth it. LLMs (or rather "foundation models") will continue to capture more and more areas of competence. If I want one that - for example - serves as the front-end chatbot for a store I run, so that people can ask for product explanations, do I then need the 500 IQ GPT-7 that won two Nobel prizes last year?
I think it's most likely that there will always be huge black-box models forming the peak of what is possible with machine intelligence, but what people actually use and interact with in practice will simply be "good enough" smaller, open-source models.
Dwanyelle t1_jds42hs wrote
Exactly. It's not "what's the most impressive model possible?". It's "what's the most impressive model possible that can run on $1000 or less of hardware?"
rya794 t1_jdsev38 wrote
Yeah, I agree with this, but I still don't see what advantage the state-of-the-art providers gain by adhering to an open protocol. If anything, doing so would (on the margin) push users toward open-source models when they might otherwise have been willing to pay for a more advanced model just to access certain plugins.
That being said, I do think that a standardized approach to a plugin ecosystem will arise. I just think it’s silly to expect any of the foundation model providers to participate.
alexmin93 t1_jduoxj4 wrote
The problem is not the model but the training dataset. That's the thing that costs millions for OpenAI. Alpaca performs rather poorly, mostly because it's trained on GPT-3-generated text.
sweatierorc t1_jdszzh4 wrote
Firefox did; it only lost to another "open-source" project.
rya794 t1_jdt0dxe wrote
That’s a really good counter argument. You may have moved me over to the other side.
AngusDHelloWorld t1_jdtq232 wrote
And not everyone cares about open source. At least for non-technical people, as long as they can get things done, it's good enough for them.
beryugyo619 t1_jds9oz8 wrote
Yeah, the only advantage they seem to have is a couple of <500GB sets of model weights in their hands, solely from being the first mover, without much else to back it up.