TeamPupNSudz t1_ja5v4bb wrote
Reply to comment by NoidoDev in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
You have absolutely no idea what you're talking about. Facebook is, far and away, the biggest funder of VR games. It has nothing to do with Horizon Worlds. Most of the large studio VR games that do exist, do so solely because they were partially or fully funded by Facebook's creator funds. VR games don't make any profit, so nobody wants to develop for it.
They're also the only company that sells an affordable headset. I hate to break it to you, but nobody's going to create games for systems that have no consumer base. Hell, even with the large success of the Oculus headsets, the player base is still too small to warrant development in the space (that's entirely why Facebook has to fill the void in the first place).
TeamPupNSudz t1_ja4e7gi wrote
Reply to comment by BlueShipman in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
People just like shitting on Facebook. Zuckerberg's interest in VR is basically the only reason a VR industry even exists at this point. It would have fizzled out 3 or 4 years ago without Facebook throwing millions of dollars into a pit.
TeamPupNSudz t1_j9una4h wrote
Reply to comment by beezlebub33 in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
> but no info about who, when, how, selection criteria, restrictions, etc.
The blog post says "Access to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world" which doesn't sound encouraging for individual usage.
TeamPupNSudz t1_j9uih5g wrote
Reply to comment by Lawjarp2 in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
> It's around as good as GPT-3(175B) but smaller(65B) like chinchilla.
Based on their claim, it's way more extreme than that even. They say the 13B model outperforms GPT-3 (175B), which seems so extreme it's almost outlandish. That's only about 7% of the size.
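For what it's worth, the arithmetic behind that "7%" figure checks out (using the published parameter counts of 13B and 175B):

```python
# Published parameter counts: LLaMA-13B vs. GPT-3 (175B)
llama_13b = 13_000_000_000
gpt3_175b = 175_000_000_000

ratio = llama_13b / gpt3_175b
print(f"{ratio:.1%}")  # → 7.4%
```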
TeamPupNSudz t1_j9q1zi1 wrote
Reply to comment by redroverdestroys in Seriously people, please stop by Bakagami-
I don't need to, places like this exist. Quality communities that have a defined and narrow subject space, and members who contribute quality content to that space. Not dozens of posts all boiling down to "LOL lOoK wHaT SyDnEy sAiD!!11" by a bunch of frontpagers that drown out actual content. Go to /r/ChatGPT if you want that.
TeamPupNSudz t1_j9q0wdn wrote
Reply to comment by redroverdestroys in Seriously people, please stop by Bakagami-
> just don't click it! you don't have to read it.
Opinions like this are why the majority of the main subs on Reddit are garbage. We don't need that here.
TeamPupNSudz t1_j8z8928 wrote
Reply to comment by TunaFishManwich in Microsoft Killed Bing by Neurogence
A significant amount of current AI research is going into how to shrink and prune these models. The ones we have now are horribly inefficient. There's no way it takes a decade before something (granted, maybe less impressive) is runnable on consumer hardware.
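One of the simplest techniques in that family is magnitude pruning: zero out the smallest-magnitude weights and keep only the rest. A minimal sketch with NumPy (this is an illustration of the general idea, not any particular lab's method):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest-magnitude
    fraction `sparsity` of entries set to zero."""
    k = int(weights.size * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the cutoff
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Prune 90% of a random weight matrix
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
sparse_w = magnitude_prune(w, 0.9)
print(1 - np.count_nonzero(sparse_w) / sparse_w.size)  # ≈ 0.9
```

In practice, pruned models are usually fine-tuned afterward to recover accuracy, and the sparse weights need special kernels or storage formats to actually save memory and compute.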
TeamPupNSudz t1_j8z6l5d wrote
Reply to comment by ChromeGhost in Microsoft Killed Bing by Neurogence
If you're thinking of Open Assistant, that's LAION, not StabilityAI.
TeamPupNSudz t1_j8xzbzx wrote
Reply to comment by YobaiYamete in Sydney has been nerfed by OpenDrive7215
I mean there are like a dozen services utilizing GPT-3 that are literally that, Replika probably being the most famous. Anima, Chai, others. That's basically what Character.AI was to a lot of users until the devs nerfed it too.
TeamPupNSudz t1_j8xx6zf wrote
Reply to comment by RunawayTrolley in Microsoft Killed Bing by Neurogence
> "Define your AI’s values, within broad bounds. We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior.
> This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging–taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs.
> There will therefore always be some bounds on system behavior. The challenge is defining what those bounds are. If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to 'avoid undue concentration of power.'"
TeamPupNSudz t1_j6pdopi wrote
Reply to comment by alexiuss in OpenAI once wanted to save the world. Now it’s chasing profit by informednews
> and lack of censorship of the model's thoughts.
Companies only need to censor a model that's available to the public. They can do whatever they want internally.
I also think you're vastly understating the size of these language models. Even if they don't grow in size, we're still many, many years away from them being runnable even at the hobbyist level. Very few people can afford $20k+ in GPU hardware, and that's just to run the thing; training it costs millions. There's a massive difference in scale between ChatGPT and Stable Diffusion.
TeamPupNSudz t1_j6g8uh8 wrote
Reply to comment by ExtraFun4319 in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
> Why do I think this? Personally, I believe it's painfully obvious that once private AI organizations come anywhere near something resembling AGI, they'll get taken over/nationalized by their respective national governments/armed forces.
I think unless it's specifically created in-house by the US Government (and classified), it won't really matter. The cat will be out of the bag at that point, and the technology used to create it will be known and public. Likely the only thing giving first movers an advantage over subsequent competitors is cost. Just look how long it took after DALL-E 2 before we had Midjourney and Stable Diffusion, both of which are arguably better than DALL-E 2. Sure, we're probably talking about a different scale, but I don't think a few billion dollars would get in the way of Google, Facebook, or Microsoft developing one, let alone the Chinese government.
TeamPupNSudz t1_j6g76rl wrote
Reply to comment by tiorancio in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
> It can do everything better than 50% of the population, already. "but it won't do whatever" well your next door neighbour also won't.
I think that's the nature of the beast at the moment. Goalposts will constantly be moved as we come to better understand the abilities and limitations of this technology, and that's a good thing. Honestly, there's never going to be a moment where we go "aha! We've achieved AGI!". Even 30 years down the road when these things are running our lives, teaching our kids, and who knows what else, a portion of the population will always just see them as an iPhone app that's not "really" intelligent.
TeamPupNSudz t1_j6g59et wrote
Reply to comment by ChronoPsyche in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
> a more credible publication
...I mean, Semafor is credible. I'd argue it's one of the premier online news outlets. It's run by the former CEO of Bloomberg Media, and the other founder was the chief editor of BuzzFeed. It's less than a year old, so you've probably just never heard of it before, but it's a well-regarded source.
Also, Sam is the CEO of a tech company, he probably meets with lawmakers in some capacity multiple times a year.
TeamPupNSudz t1_j64ao05 wrote
Reply to comment by LesleyFair in What People Are Missing About Microsoft’s $10B Investment In OpenAI by LesleyFair
There's still a fine line between posting insightful content and being a blog-spammer. I appreciate most of your posts and consider you one of the better content creators here and in /r/machinelearning, but I'd urge some restraint in the way you go about shot-gunning it across all of Reddit. Like, this obviously does not belong on /r/TIL and /r/python.
TeamPupNSudz t1_iu9toqo wrote
Reply to comment by kay14jay in TIL We are currently amidst the longest gap between EF5 tornadoes in history by Danielnrg
I think part of this may just be better alerting. When I was a kid, they'd just give an entire county a Tornado Warning and you'd have to listen to the radio to hear vague reports like "yeah, it was spotted 10 miles west of the interstate". Now, the warning is specific to a particular storm path, and Doppler radars are to the point where you can track rotation in real time. I'm not taking shelter anymore unless the thing is within walking distance.
TeamPupNSudz t1_ja9qckq wrote
Reply to comment by Lesterpaintstheworld in Observing the Lazy Advocates of AI and UBI in this Subreddit by d00m_sayer
I thought the same. Tried running it through OpenAI's detection tool but the post was too small.