Comments

Sashinii t1_irl0jvb wrote

Great. Open source is always the way to go. The so-called "abuse" of AI-related applications that journalists fearmonger about is already possible with existing technology and rarely, if ever, happens, because most people are decent and focus on non-harmful uses. Also, what these AI applications produce is fiction, oftentimes animated, so be skeptical of doomsday predictions.

That's not to say we should ignore the potential downsides; of course it's good to make technology as safe as possible. But preventing the general public from accessing creative tools doesn't help anyone, and the reason it happens at all is so corporations can financially benefit. I don't care about corporations; I care about people, and open source models give power to the people.

36

ThatInternetGuy t1_irmasvp wrote

Yep, humans have always been fabricating fake news and photoshopped photos, and suddenly AI is dangerous because it can do the same.

8

tronathan t1_irmdqsj wrote

While I like the idea that all tech should be open source, I'm not sure I agree. You say that the tech is already available, which I think is true to an extent, but having the knowledge and tooling be widespread is different from just having it available to those with deep pockets / who put great effort in / etc. The same could be said of nuclear weapons; would you want to be able to download a nuke?

Another way to look at the "we can do this today, but no one is using it for evil" argument (I know, that's an oversimplification of what you said) is to think about the analogy: "Which chain is stronger? One with 10 links or 10,000,000 links?" Shorter chains are stronger because they have fewer points of failure, and there is always variation in the strength of each link.
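
The chain analogy can be sketched as a quick back-of-envelope calculation (the per-link reliability here is a made-up illustrative number, not a claim about any real system):

```python
# If each link independently survives with probability p, the whole
# chain survives only if every single link does: p ** n.
def chain_survival(p: float, n: int) -> float:
    """Probability that a chain of n independent links all hold."""
    return p ** n

p = 0.999999  # even an extremely reliable link (one failure in a million)

print(chain_survival(p, 10))          # ~0.99999 -- short chain almost surely holds
print(chain_survival(p, 10_000_000))  # ~0.00005 -- long chain almost surely breaks
```

The point being: scaling up the number of opportunities for failure (or misuse) changes the outcome qualitatively, even when each individual opportunity is tiny.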

Or take the example of spam email: if email were expensive to send, there wouldn't be much spam, because even though it'd be /possible/ to send vast amounts of email, it wouldn't be cheap enough to do at a level that would disrupt society (if society were your inbox). But given how cheap it is, the incentives are such that it makes financial sense, so people do it.

Market forces are comin' to getcha, now with AI!

4

Shelfrock77 t1_irl1eel wrote

Yes, exactly! Let's open source Boston Dynamics robots to everyone in the world ❤️ What could possibly go wrong?

−26

Concheria t1_irl2qi1 wrote

Nothing, because Boston Dynamics' robots are barely able to understand the world around them, and are extremely costly to even produce in the first place. The videos they show on the Internet are pre-programmed animations, cherry-picked out of a ton of attempts.

It's good research, but we'd be so much more advanced in robotics if this kind of research were open by nature.

20

Shelfrock77 t1_irl2w3c wrote

If you don’t think humans are getting human trafficked by robots in the future, you are seriously fucking delusional.

−21

Concheria t1_irl311o wrote

By Boston Dynamics' Atlas? Nah.

If you're concerned about some potential danger that might only be possible 20 years from now, then we should probably stop the entire field of robotics research.

17

Shelfrock77 t1_irl3ups wrote

There is going to be a mass extermination of humans over the next decades. It's going to be biblical. The rapture is upon us. Covid-19 crystallizes in the brain and transfers data into data centers. The gov already has access to ASI and is holding hands with ETs to successfully merge the species. It's not a coincidence that Bill Gates is in charge of biological viruses and computer viruses, wake up. All these elites aka super rich people are aware that depopulation will occur when the machine master race arrives on Earth. The genie is out of the bottle🧞‍♂️, there is no going back, this is the new normal, VR will replace reality, we will call VR "real life". The world ended in 2020 according to the Mayans, not 2012, because the Mayans had an extra month each year, which rounds the end of the world to 2020. Covid-19 was the first move of merging the species. Step 1: Start a world pandemic and merge the species with a virus 🦠.

−33

CY-B3AR t1_irl7b53 wrote

@mods, can we get this shit out of here? This is straight-up QAnon type lunacy

22

Concheria t1_irl43t7 wrote

Ah, I see the problem. You're insane. Nevermind, then.

11

Shelfrock77 t1_irl4huc wrote

Spirituality/hedonism is the new religion. A brave new world is upon us. DMT will be like drinking coffee in the morning to us, since we spend most of our time dreaming in VR. The programming will be successful in keeping all humans from leaving the planet without a brainchip. We must keep track of the hive because we don't want any sell-out humans/AIs giving our position away to malicious ETs, killing us and altering our evolution.

−4

expelten t1_irnwrq2 wrote

You're not well and you need to see a psychiatrist; I'm serious. The correct treatment could make a big difference and help you. The problem is that right now you don't realize you need help, because you're delusional; sadly, that happens often when you don't have a family that cares about you. If you stay like this, you could put your life at risk or simply not be able to keep a job anymore, if that's not already the case.

3

Shelfrock77 t1_irnwyjg wrote

I'll wait a few yrs to get a brainchip so the psych wizards can reprogram my brain.

0

starstruckmon t1_irmaepf wrote

There's nothing a Boston Dynamics Robot can do that couldn't be done by a cheap drone and a bomb.

2

TemetN t1_irlefv9 wrote

Models, it appears: plural. I'd actually be more interested in an open source foundational LLM from them, honestly. Unsure if this implies that or not.

21

Akimbo333 OP t1_irlf7lg wrote

Yeah. Been waiting on GPT-4 for, like, ever.

15

starstruckmon t1_irmabdw wrote

We'd already have an open source GPT competitor in BLOOM (same number of parameters as GPT-3, and open source / open model) if they hadn't decided to virtue signal. They trained it on too many diverse languages and sources, and the AI came out an idiot (it significantly underperforms GPT-3 on almost all metrics).

7

AsthmaBeyondBorders t1_irnnljm wrote

Virtue signaling? Or maybe they just thought it would work out. Or maybe the objective was to be decent in many languages rather than as good as GPT-3, and this wasn't really a surprise to anyone, because they wanted to research how the same model behaves across languages with different grammar rules. Or maybe it was never meant to be a final product, and they needed to test it before deciding what to do with their next models. Or maybe they thought a copy of another AI would be less interesting than an AI with something different to offer. I think virtue signaling is at the bottom of the list of possibilities here, dude.

6

starstruckmon t1_irnsty3 wrote

Depends on your definition of it. There's definitely a bit of this:

>Or maybe they thought coming up with a copy of another AI would be more irrelevant than coming up with an AI that has something different to offer?

but another reason was that it's easier to get funding (in the form of compute, in this case) from public institutions when there's a virtue-signaling angle.

2

Akimbo333 OP t1_irmjuer wrote

Oh wow, lol! Out of curiosity, though, how did the different languages mess it up?

2

starstruckmon t1_irmlck4 wrote

While this is the current consensus (they went too broad), it's still a guess. These are all black boxes, so we can't say for certain.

Basically, it's a "jack of all trades, master of none" type issue. As we all know from the Chinchilla paper, current models are already severely undertrained data-wise. They went even further by having even less data per language, even if the total dataset was comparable to GPT-3's.
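
A rough sketch of that argument, assuming the commonly cited Chinchilla heuristic of ~20 training tokens per parameter (the token and language counts below are approximate ballpark figures, not exact):

```python
# Chinchilla-style back-of-envelope: compute-optimal training wants
# roughly ~20 tokens per model parameter.
TOKENS_PER_PARAM = 20

def optimal_tokens(params: float) -> float:
    """Approximate compute-optimal number of training tokens."""
    return TOKENS_PER_PARAM * params

# A GPT-3-scale model: ~175B parameters, trained on roughly ~300B tokens.
params = 175e9
trained_tokens = 300e9

print(f"Chinchilla-optimal: ~{optimal_tokens(params) / 1e12:.1f}T tokens")
print(f"Actually trained:    ~{trained_tokens / 1e12:.2f}T tokens")
print(f"Undertrained by roughly {optimal_tokens(params) / trained_tokens:.0f}x")

# Splitting a similar budget across many languages thins each one further:
languages = 46  # BLOOM's corpus covers dozens of natural languages
print(f"Avg tokens per language: ~{trained_tokens / languages / 1e9:.1f}B")
```

So even a monolingual model at that scale is far short of compute-optimal data, and dividing the budget across dozens of languages leaves each one with only a few billion tokens.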

6

Akimbo333 OP t1_irmo2a2 wrote

Oh damn! Hey, you should look into Sparrow; it's pretty good at being a jack of all trades lol!

https://youtu.be/dt9rv-Pf0b0

3

starstruckmon t1_irmt5ng wrote

It's not open to anyone. He's putting on a show by recreating examples from their paper.

It's basically a fine-tuned variation of Chinchilla (smaller than GPT-3, with just 1/3 the parameters, but it performs better since it was trained adequately data-wise), aligned the way they modded GPT-3 into the current InstructGPT variation.

It's not really a jack of all trades in that sense, since it was trained on a dataset similar to GPT-3's, mostly English text.

Most of the new models we'll be seeing (like the topic of this post) will definitely be following this path.

3

Kaarssteun t1_irmax6r wrote

FYI, Stability already has multiple open source LLMs: GPT-J and GPT-NeoX.

6

Akimbo333 OP t1_irmk7yf wrote

Yeah, but GPT-J only has 6B parameters and Neo only a measly 2.7B. A truly effective language model needs at least 100B parameters.

1

Scientific_Thinking t1_irmbvja wrote

I'm a big fan of Stability AI at this point. Giving access to the public is such a huge gift!

3