Comments
kaenneth t1_jed7exg wrote
wordholes t1_jed7nco wrote
The future of AI: https://www.youtube.com/watch?v=QEzhxP-pdos
z57 t1_jedfhgf wrote
Wasn't Stanford's Alpaca trained using GPT?
Yes, I think it was: "Researchers train a language model from Meta with text generated by OpenAI's GPT-3.5 for less than $600"
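For the curious, the Alpaca recipe is essentially distillation through text: ask a stronger "teacher" model for responses to a pile of instructions, save the pairs, and fine-tune a smaller model on them. Below is a minimal sketch of just the data-collection step, assuming a hypothetical `query_teacher` placeholder for whatever model API you would actually call; the JSONL field names follow Alpaca's instruction/input/output convention.

```python
import json

def query_teacher(instruction: str) -> str:
    """Hypothetical stand-in for a call to the teacher model's API."""
    raise NotImplementedError("wire up your own model client here")

def build_distillation_set(instructions, path="distill.jsonl"):
    """Collect (instruction, response) pairs in the Alpaca-style
    instruction/input/output JSONL layout used by common fine-tuning scripts."""
    with open(path, "w", encoding="utf-8") as f:
        for instruction in instructions:
            response = query_teacher(instruction)  # teacher model generates the "label"
            f.write(json.dumps({
                "instruction": instruction,
                "input": "",
                "output": response,
            }) + "\n")

seed_instructions = [
    "Explain the difference between scraping a site and using its API.",
    "Summarize what a terms-of-service violation is.",
]
# build_distillation_set(seed_instructions)  # uncomment once query_teacher is implemented
```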
Orqee t1_jedubqo wrote
It’s called meta probabilistic recursion. Because I just named it.
[deleted] t1_jedeb60 wrote
[deleted]
autotldr t1_jed7kdh wrote
This is the best tl;dr I could make, original reduced by 54%. (I'm a bot)
> The Information's report also contains the potentially staggering thirdhand allegation that Google stooped so low as to train Bard using data from OpenAI's ChatGPT, scraped from a website called ShareGPT. A former Google AI researcher reportedly spoke out against using that data, according to the publication.
> According to The Information's reporting, a Google AI engineer named Jacob Devlin left Google to immediately join its rival OpenAI after attempting to warn Google not to use that ChatGPT data because it would violate OpenAI's terms of service, and that its answers would look too similar.
> Update March 30th, 2:02PM ET: Google would not answer a follow-up question about whether it had previously used ChatGPT data for Bard, only that Bard "isn't trained on data from ChatGPT or ShareGPT."
Extended Summary | FAQ | Feedback | Top keywords: Google #1, data #2, Bard #3, ChatGPT #4, train #5
clyro_b t1_jedps82 wrote
Ha, that's funny. Google suspended my account last week for scraping data from Google.
Orqee t1_jeduhnl wrote
Funny because their business model is based on scraping data from other websites. But only if you are worthy.
[deleted] t1_jedxyr9 wrote
[deleted]
frosthowler t1_jee0y0f wrote
The terms of service don't matter in the context of anti-competitive practices; if scraping becomes a key requirement for developing certain services, Google can undercut all of its competitors by using its own system.
This may seem like 'fair game' to you, but it isn't; it's anti-competitive, for the same reason the courts ruled against Microsoft over the competitive advantage Internet Explorer enjoyed before Firefox and Chrome came around.
https://en.wikipedia.org/wiki/United_States_v._Microsoft_Corp.
And that case wasn't even about forbidding anything; it was about merely inconveniencing competitors.
Edit: It says the ruling was 'partially overturned'. That doesn't refer to the finding that Microsoft acted illegally; it refers to the order to break Microsoft up into two companies. Only that part was overturned.
The landmark ruling forced Microsoft to open and document its APIs, which made it possible to develop browsers like Firefox and Chrome and ultimately crushed Internet Explorer.
[deleted] t1_jee24w9 wrote
[deleted]
ChuckVader t1_jee6v8e wrote
I did!
wordholes t1_jef7lej wrote
You broke the first rules:
- be rich, be powerful
- peasants get shown the door
Vix_Sparda t1_jedqpzh wrote
Translation: Google lied.
Orqee t1_jedu7j1 wrote
We totally didn’t, also whaaaaaat?
[deleted] t1_jedf2g7 wrote
[removed]
[deleted] t1_jef2phx wrote
[deleted]
ishmal t1_jee8dwc wrote
I can totally see one or more developers doing something like this if they need to develop a feature and the rest of Bard isn't ready yet. They test and verify their feature, add it to Bard, then leave the model behind. So the model would only be used for development and would never be seen by end users.
wordholes t1_jed6wd9 wrote
Oh my god they're using approximate data from a probabilistic model to train another even more approximate probabilistic model.
What level of generational loss is this??
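To make the "generational loss" quip concrete, here's a toy sketch (an analogy under stated assumptions, not a claim about how Bard or ChatGPT behave): fit a trivial Gaussian "model" to data, sample from it, fit the next generation only to those samples, and repeat. The estimates drift and the variance tends to shrink as each generation treats the previous one's sampling error as truth.

```python
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=500)  # "ground truth" data

# Generation 0: fit the first "model" (just a mean and std) to real data.
mu, sigma = real_data.mean(), real_data.std()

for generation in range(1, 6):
    # Each new generation is trained only on samples from the previous model.
    synthetic = rng.normal(mu, sigma, size=500)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"generation {generation}: mu={mu:+.3f} sigma={sigma:.3f}")

# Over many generations the mean wanders and the variance tends to collapse,
# because each fit bakes in sampling error that the next generation inherits.
```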