VelveteenAmbush
VelveteenAmbush t1_jegqlm8 wrote
Reply to comment by Lemonio in A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data by jack_lafouine
They're competing with Google but Google doesn't publish a lot of text as far as I know.
I don't see how they're a competitor to Reddit.
VelveteenAmbush t1_jegos18 wrote
Reply to comment by dogegunate in Senator Warner’s RESTRICT Act Is Designed To Create The Great Firewall Of America by vriska1
> acquiring
No, just owning would be enough
Cool it with the insults, they only make you sound fragile
VelveteenAmbush t1_jegezam wrote
Reply to comment by dogegunate in Senator Warner’s RESTRICT Act Is Designed To Create The Great Firewall Of America by vriska1
If your conception of freedom and liberty means that the US would have been required to allow the USSR to own and operate CBS during the Cold War, then you're living in another universe.
VelveteenAmbush t1_jefz923 wrote
Reply to comment by atwegotsidetrekked in Senator Warner’s RESTRICT Act Is Designed To Create The Great Firewall Of America by vriska1
> Yes we had access to Soviet TV.
That isn't what I asked.
VelveteenAmbush t1_jefpifg wrote
Reply to comment by atwegotsidetrekked in Senator Warner’s RESTRICT Act Is Designed To Create The Great Firewall Of America by vriska1
The biggest issue is programming, not privacy.
Should we have allowed the USSR to operate a major television broadcasting network in the US at the height of the Cold War?
The GDPR has nothing to do with that concern.
Anyway, "citizens can decide for themselves" is not how we usually handle trade disputes. If Country X tariffs or bans our widgets, we usually respond by tariffing or banning their doohickeys. It isn't up to our citizens to decide for themselves whether to use Country X's doohickeys.
VelveteenAmbush t1_jed4ffv wrote
Reply to comment by WaitingForNormal in Senator Warner’s RESTRICT Act Is Designed To Create The Great Firewall Of America by vriska1
Not sure why reciprocating protectionism is such a bad thing. We do that in trade all the time. But in apps specifically, China can ban all of ours but we can't ban theirs?
VelveteenAmbush t1_jecx9ar wrote
Reply to comment by Lemonio in A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data by jack_lafouine
Like, Google's data? Or which OpenAI competitor are you thinking about?
VelveteenAmbush t1_jec59yj wrote
Reply to comment by gurenkagurenda in A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data by jack_lafouine
> I don't see why this would be a violation of the TOS though.
It's this section:
> (c) Restrictions. You may not ... (iii) use output from the Services to develop models that compete with OpenAI;
VelveteenAmbush t1_jdsjab4 wrote
Reply to comment by artsybashev in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Also an LLM to read all of the tldrs and tell me which of them I should pay attention to.
VelveteenAmbush t1_jd0j2yv wrote
Reply to comment by Carrasco_Santo in [P] OpenAssistant is now live on reddit (Open Source ChatGPT alternative) by pixiegirl417
Assuming that the best corporate models don't have further improvements in architecture and methodology that haven't been shared publicly...
VelveteenAmbush t1_jcilkfj wrote
Reply to comment by Cold-Advance-5118 in Pornhub owner MindGeek sold to Ottawa private equity firm by marketrent
It's one of those things where if it's right there in the name, you start to wonder if they're protesting too much. Like Truth Social...
VelveteenAmbush t1_jce5y2v wrote
Reply to comment by I_will_delete_myself in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
If only it were like paying philosophers. More often it is like paying anti-corporate activists to sit inside the corporation and cause trouble. There's no incentive for them to stay targeted at things that are actually unethical -- nor even any agreement on what those things are. So they have a structural incentive to complain and block, because that is how they demonstrate impact and accrue power.
VelveteenAmbush t1_jcdxc8v wrote
Reply to comment by Smallpaul in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Maybe you're onto something.
I guess the trick is coming up with foundational patents that can't be traced back to a large tech company that would worry about being countersued. Like if you make these inventions at Google and then Google contributes them to the GPL-esque patent enforcer entity, and then that entity starts suing other tech co's, you can bet that those tech co's will start asserting their patents against Google, and Google (anticipating that) likely wouldn't be willing to contribute the patents in the first place.
Also patent litigation is really expensive, and you have to prove damages.
But maybe I'm just reaching to find problems at this point. It's not a crazy idea.
VelveteenAmbush t1_jcd760v wrote
Reply to comment by professorlust in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
They're purposefully withholding the information you'd need to use their results in research. This proposed research boycott is sort of a "you can't fire me, I quit" response.
VelveteenAmbush t1_jcd6opg wrote
Reply to comment by twilight-actual in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
You could patent your algorithm and offer some sort of GPL-like patent license, but no one respects software patents anyway (for good reason IMO) and you'd be viewed as a patent troll if you tried to sue to enforce it.
GPL itself is a copyright license and does you no good if OpenAI is using your ideas but not your code. (Plus you'd actually want AGPL to force code release for an API-gated service, but that's a separate issue.)
VelveteenAmbush t1_jcd6bkq wrote
Reply to comment by sobe86 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I think Hassabis' goal is to build a synthetic god and reshape the cosmos, and open research isn't necessarily conducive to that except as needed to keep researchers motivated and engaged.
VelveteenAmbush t1_jccksp9 wrote
Reply to comment by the_mighty_skeetadon in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Right, Google's use of this whole field has been limited to optimizing existing products. As far as I know, after all their billions in investment, it hasn't driven the launch of a single new product. And the viscerally exciting stuff -- what we're calling "generative AI" these days -- never saw the light of day from inside Google in any form except arguably Gmail suggested replies and occasional sentence completion suggestions.
> it's a different mode of launching with higher risks, many of which have different risk profiles for Google-scale big tech than it does for OpenAI
This is textbook innovator's dilemma. I largely agree with the summary but think basically the whole job of Google's leadership boils down to two things: (1) keep the good times rolling, but (2) stay nimble and avoid getting disrupted by the next thing. And on the second point, they failed... or at least they're a lot closer to failure than they should be.
> Example: ChatGPT would tell you how to cook meth when it first came out, and people loved it. Google got a tiny fact about JWST semi-wrong in one tiny sub-bullet of a Bard example, got widely panned and lost $100B+ in market value.
Common narrative, but I think the real reason Google's market cap tanked at the Bard announcement comes down to two other things: (1) they showed their hand, and it turns out they don't have a miraculous ChatGPT-killer up their sleeves after all, and (2) the cost structure of LLM-driven search results is much worse than that of classical search tech, so Google is going to be less profitable in that world.
Tech journalists love to freak out about everything, including LLM hallucinations, bias, toxic output, etc., because tech journalists get paid based on engagement -- but I absolutely don't believe that stuff actually matters, and OpenAI's success is proving it. Google's mistake was putting too much stock in the noise that tech journalists create.
VelveteenAmbush t1_jccizz1 wrote
Reply to comment by Nhabls in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
It has nothing to do with semantics, it's basic corporate strategy.
VelveteenAmbush t1_jcc4mvf wrote
Reply to comment by Nhabls in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Transformers aren't products, they're technology. Search, Maps, Ads, Translation, etc. -- those were the products. Those products had their own business models and competitive moats that had nothing to do with the technical details of the transformer.
Whereas GPT-4 is the product. Access to it is what OpenAI is selling, and its proprietary technology is the only thing that prevents others from commoditizing it. They'd be crazy to open up those secrets.
VelveteenAmbush t1_jcbw6mx wrote
Reply to comment by MysteryInc152 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
DeepMind's leaders would love to hoard their secrets. The reason they don't is that it would make them a dead end for the careers of their research scientists -- because aside from the occasional public spectacle (AlphaGo vs. Lee Sedol) nothing would ever see the light of day. If they stopped publishing, they'd hemorrhage talent and die.
OpenAI doesn't have this dilemma because they actually commercialize their cutting-edge research. Commercializing the research makes its capabilities apparent to everyone, and being involved in its creation advances your career even without a paper on arXiv.
VelveteenAmbush t1_jcbv79q wrote
Reply to comment by ComprehensiveBoss815 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
The fact that they make their stuff available commercially via API is enough to make them 100x more "open" than the big tech companies.
VelveteenAmbush t1_jcbv0rs wrote
Reply to comment by Nhabls in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
GPT-4 is an actual commercial product though. AlphaGo was just a research project. No sane company is going to treat the proprietary technological innovations at the core of their commercial strategy as an intellectual commons. It's like asking them to give away the keys to the kingdom.
VelveteenAmbush t1_jcbu8nr wrote
Reply to comment by ScientiaEtVeritas in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
> While they also potentially don't release every model (see Google's PaLM, LaMDA) or only with non-commercial licenses after request (see Meta's OPT, LLaMA), they are at least very transparent when it comes to ideas, architectures, trainings, and so on.
They do this because they don't ship. If you're a research scientist or ML research engineer, publication is the only way to advance your career at a company like that. Nothing else would ever see the light of day. It's basically a better-funded version of academia, because it doesn't seem to be set up to actually create and ship products.
Whereas if you can say "worked at OpenAI from 2018-2023, team of 5 researchers that built GPT-4 architecture" or whatever, that speaks for itself. The products you release and the role you had on the teams that built them are enough to build a resume -- and probably a more valuable resume at that.
VelveteenAmbush t1_j9w6cob wrote
Reply to comment by TheFuzziestDumpling in US says Google routinely destroyed evidence and lied about use of auto-delete by OutlandishnessOk2452
He thought they were helping individual criminals get away with their crimes, which is something that a strangely high proportion of Reddit seems to favor.
VelveteenAmbush t1_jegt3yf wrote
Reply to comment by bigflamingtaco in Senator Warner’s RESTRICT Act Is Designed To Create The Great Firewall Of America by vriska1
Is that your actual objection or just an excuse? If there were a clean bill that banned TikTok but didn't do whatever other bad things you're worried about, you'd support it?