ObjectManagerManager t1_j60y1rn wrote

OpenAI's LLM is special because it's open to the public. That's it. Other tech companies' internal LLMs are likely better. Google has billions of crawled websites and the indexes built over them directly at their disposal; I'm quite confident they could outperform ChatGPT with ease. If Google were really afraid of ChatGPT running them out of business, they'd just release a public API for their own, better model. And they have a near-monopoly on raw web data and search R&D; it would be virtually impossible for anyone else to compete.

Besides that, the whole "Google killer" thing is an overreaction, IMO. The public API for ChatGPT doesn't retrain or even prompt-condition on new public internet data, so if you ask it about recent news, it'll spit out utter garbage. An internal version reportedly does seek out and retrain on new public internet data. But how does it find that data? With a neat tool that constantly crawls the web and builds large, efficient databases and indexes. Oh yeah: that's called a search engine.
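
To make the "crawler plus index" bit concrete, here's a toy inverted index in Python. Purely illustrative, nothing here is real Google or OpenAI code, and real search infrastructure is obviously far more sophisticated:

```python
# Toy sketch of what "crawls the web and builds indexes" boils down to:
# an inverted index mapping terms to the URLs that contain them.
from collections import defaultdict

def build_index(pages: dict[str, str]) -> dict[str, set[str]]:
    """pages maps url -> page text; returns term -> set of URLs containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for url, text in pages.items():
        for term in set(text.lower().split()):
            index[term].add(url)
    return index

def lookup(index: dict[str, set[str]], query: str) -> set[str]:
    """Naive AND search: URLs that contain every term in the query."""
    hits = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*hits) if hits else set()
```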

So even if end users start using LLMs as a substitute for search engines (which is generally not happening at the moment, and it seems unlikely to become a real concern in the GPT-3 era, despite what many people believe), most LLM queries will likely be forwarded to one search engine or another for prompt conditioning. Search engines will not die; they'll just have to adapt to be useful for LLM prompt conditioning in addition to being useful to end users.
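
Very roughly, that "forward the query to a search engine, then condition the prompt on the results" flow looks like the sketch below. Hand-wavy: `web_search()` is a made-up placeholder for whatever search API actually gets used, and the completion call assumes the pre-1.0 `openai` Python client with a GPT-3-era model.

```python
# Sketch of prompt conditioning on search results (all names are placeholders).
import openai

# openai.api_key = "..."  # set your key before calling

def web_search(query: str, k: int = 3) -> list[str]:
    """Placeholder: pretend this hits a crawler-built index (i.e. a search
    engine) and returns the top-k result snippets."""
    raise NotImplementedError("plug a real search API in here")

def answer_with_search(question: str) -> str:
    # Condition the prompt on fresh search results instead of relying on
    # whatever stale data happened to be in the model's training set.
    context = "\n\n".join(web_search(question))
    prompt = (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=256
    )
    return resp["choices"][0]["text"].strip()
```

Whether it's Google or Bing sitting behind `web_search()` is an implementation detail; the point is that the retrieval layer doesn't go away.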

17

lucidrage t1_j61u7zt wrote

> that's called a search engine.

like Bing? :D

Google isn't known for developing new products and sticking with them. When that Google engineer went public about their supposedly "sentient AI" model, why didn't Google beat the news cycle by releasing a Google-GPT with search engine capabilities?

With their 150k engineers, I doubt they lack the resources to build a user-friendly version of their LLM, so how come they've been sitting on their hands the whole time?

3

binheap t1_j61v2f2 wrote

If you believe them, model safety is why there isn't a general public release. LLMs (including ChatGPT) tend to be bad at factual accuracy and can easily hallucinate. It's not obvious that you can work LLMs into a product where accuracy matters a lot, and it might hurt brand image in ways that Google couldn't tolerate but OpenAI can.

4

visarga t1_j6bz9e7 wrote

"Model security" is really the security of Google's revenue if they release the model. ChatGPT is very insecure for their ad clicks; it would crash their income. /s

1