Submitted by rretaemer1 t3_10yz6uq in Futurology

Just curious. All the news about Microsoft and Google lately has made me wonder if we're approaching a moment where AI can be integrated into open source technology, and therefore expand the reach of open source usability for the everyday person. There's obviously already a thriving open source community for almost anything someone could think of, but the open source versions of things are often just a little behind what the proprietary versions helmed by a company can do. Could AI integration get us to a moment where open source technology is on par with, or even superior to, proprietary software?

I know it's a somewhat vague question, but just curious to hear what people may think.

1

Comments

Bewaretheicespiders t1_j80mm5w wrote

I work in AI. Pretty much all of AI is open source, and open research too. Google's deep learning framework, TensorFlow, is free and open source. Same with Meta's (IMO superior) PyTorch. It's in large part because these two frameworks are open source that AI is currently thriving. They all publish their innovations too.
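
To give a sense of how low the barrier to entry is, here's a minimal sketch of a training loop anyone can run after a pip install of PyTorch (the toy model and random data are just stand-ins for illustration):

```python
# Anyone can train a model with the same open source framework Meta uses.
import torch
import torch.nn as nn

# Toy regression model and random data; the framework, not the model, is the point.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(64, 10), torch.randn(64, 1)

for step in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()   # autograd computes gradients
    optimizer.step()  # gradient descent update
```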

But to train large AI you need a lot of data, at a scale that most people can't comprehend, and the network and compute capability to go along with it.

11

rretaemer1 OP t1_j80ohbl wrote

I wasn't aware that AI was already open source to a large degree. Thank you for your response. How far away are we from an AI program that can maintain itself, i.e. update itself and train itself without intervention? I apologize for any ignorance on my part. I'm just a normie who's fascinated.

3

Bewaretheicespiders t1_j80tjfj wrote

When we say AI, we don't mean AI in the way you're thinking. We mean software that can get its behavior from data, instead of being programmed instruction by instruction.

It doesn't imply intelligence, not in the way you think. Those chatbots that are in the news lately don't do anything like "reason". They are sophisticated parrots: statistical models of what are believable things to say in certain situations. But just like a parrot doesn't understand 17th-century economics when it repeats "pieces of eight!", these chatbots don't reason. They've just deduced from past conversations what are believable things to say.
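
As a toy illustration (my own sketch, nothing like a real LLM's internals), here's a literal "statistical parrot" that only ever emits the continuation it has seen most often:

```python
# A toy statistical parrot: count which word follows which in a corpus,
# then emit the most "believable" continuation. Real LLMs are vastly more
# sophisticated, but modeling the statistics of the next token is the
# same idea in spirit.
from collections import Counter, defaultdict

corpus = "pieces of eight pieces of gold pieces of eight".split()

# Count next-word frequencies for each word (a bigram model).
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def parrot(word):
    """Return the statistically most likely next word, with no understanding."""
    return follows[word].most_common(1)[0][0]

print(parrot("pieces"))  # -> "of"
print(parrot("of"))      # -> "eight" (seen twice vs. "gold" once)
```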

So

>How far away from an AI program that can maintain itself are we?

I don't know. We don't even have "an AI program", not in the way you think. We have software that deduces from data how to perform some tasks.

6

mentive t1_j8157yq wrote

That's what the singularity wants us to think.

2

rretaemer1 OP t1_j80v04z wrote

For sure. As a normie who's just fascinated, I know that I know very little about AI. I know there's nothing that could be considered "conscious" in any way in AI's current state, and a lot of it is not too far off from something like a hyper-sophisticated Skyrim NPC.

I know that something like GPT can produce code if asked to, though, and that in some cases it's even produced things that could serve as the basis for apps. If it's capable of producing contextual code, then I don't see how it could be too far off from doing things like "updating itself" on the software front.

Thank you for your response.

1

MysteryInc152 t1_j81e986 wrote

Calling large language models "sophisticated parrots" is just wrong and weird lol. And it's obvious how wrong it is when you use these tools and evaluate them without any weird biases or undefinable parameters.

This for instance is simply not possible without impressive recursive understanding. https://www.engraved.blog/building-a-virtual-machine-inside/

We give neural networks data and a structure to learn that data, but outside of that, we don't understand how they work. What I'm saying is that we don't know what individual neurons or parameters are learning or doing. And a neural network's objective function can be deceptively simple.

How complex you feel "predicting the next token" can possibly be is much less relevant than the question "What does it take to generate paragraphs of coherent text?". There are a lot of abstractions to learn in language.
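
To make that concrete, here is the standard next-token objective in code, with made-up tensor shapes standing in for a real model's output. The loss is one line; everything interesting is in what a network must learn to drive it down:

```python
import torch
import torch.nn.functional as F

# The entire training objective of a causal LM: given logits over the
# vocabulary at each position, make the true next token likely.
vocab_size, seq_len, batch = 100, 8, 4
logits = torch.randn(batch, seq_len, vocab_size)              # model output (stand-in)
next_tokens = torch.randint(0, vocab_size, (batch, seq_len))  # shifted targets

loss = F.cross_entropy(logits.reshape(-1, vocab_size), next_tokens.reshape(-1))
print(loss)  # everything an LLM "knows" comes from minimizing this number
```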

The problem is that the people telling you these models are "just parrots" are engaging in a useless philosophical debate.

I've long thought the "philosophical zombie" to be a special kind of fallacy. The output and how you can interact with it is what matters, not some vague notion of whether something really "feels". If you're at the point where no conceivable test can actually differentiate the two, then you're engaging in a pointless philosophical debate rather than a scientific one.

"I present to you... the philosophical orange...it tastes like an orange, looks like one and really for all intents and purposes, down to the atomic level resembles one. However, unfortunately, it is not a real orange because...reasons." It's just silly when you think about it.

LLMs are insanely impressive for a number of reasons.

They emerge new abilities at scale - https://arxiv.org/abs/2206.07682

They build internal world models - https://thegradient.pub/othello/

They can be grounded in robotics (i.e. act as a robot's brain) - https://say-can.github.io/, https://inner-monologue.github.io/

They can teach themselves how to use tools - https://arxiv.org/abs/2302.04761

They've developed a theory of mind - https://arxiv.org/abs/2302.02083

I'm sorry but anyone who looks at all these and says "muh parrots man. nothing more" is an idiot. And this is without getting into the nice performance gains that come with multimodality (like Visual Language models).

3

Bewaretheicespiders t1_j80vky8 wrote

>I know that something like that GPT can produce coding if it's asked to though

Programming languages are meant to be super explicit and well structured, right? So for simple procedures, going from problem definition to Python is just a translation problem.

But most of a programmer's work is "figure out what the hell is wrong with that thing", not "write a method that inverts this array".
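
For example, that array task is the kind of fully specified prompt that translates straight into code (a hypothetical illustration):

```python
# "Write a method that inverts this array" is a fully specified,
# self-contained task -- exactly the kind a chatbot translates well.
def invert_array(arr):
    """Return a new list with the elements in reverse order."""
    return arr[::-1]

print(invert_array([1, 2, 3]))  # [3, 2, 1]

# The harder, more common job -- "figure out what the hell is wrong with
# that thing" -- has no such one-line translation.
```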

2

rretaemer1 OP t1_j815o22 wrote

Thank you for sharing your insight.

As someone who works in AI, what's your take on all the Bing vs. Google news lately?

1

Bewaretheicespiders t1_j816mcu wrote

The thing with Google was a silly, massive overreaction. It's trivial to get any of these chatbots to state factual errors, because they are trained on massive amounts of data that contain factual errors.

3

rretaemer1 OP t1_j81alg7 wrote

Do you think Microsoft is being intentional in challenging google with their confident messaging, potentially forcing Google to misstep? Or is it a happy accident for them? Or is this another "funeral for the iPhone" moment lol?

1

resdaz t1_j80n64k wrote

The architecture for these large language models is no secret. Everyone can see exactly how to implement them, down to the tiniest detail.

The value lies in how the models are trained and fine-tuned, and in the data itself. Which, tellingly, the big players are far less interested in sharing.
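
A rough sketch of what that asymmetry looks like: the toy architecture below is textbook public knowledge, while the pretrained weights and curated data (stood in for here by random tensors) are the parts nobody shares:

```python
import torch
import torch.nn as nn

# The architecture is public: a standard transformer encoder, a few lines
# in any open source framework.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(64, 2)  # toy classification head

# The secret sauce is what goes here: pretrained weights and curated data.
# These random tensors stand in for a dataset nobody will give you.
x = torch.randn(32, 10, 64)
y = torch.randint(0, 2, (32,))

params = list(model.parameters()) + list(head.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-4)
for step in range(10):
    optimizer.zero_grad()
    logits = head(model(x).mean(dim=1))  # pool over the sequence, classify
    loss = nn.functional.cross_entropy(logits, y)
    loss.backward()
    optimizer.step()
```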

2

Setrict t1_j80ozpi wrote

About the only way I can see open source competing is by leveraging large numbers of volunteers to create curated data sets that aren't licensed for use in closed systems. A kind of Wikipedia for AI training. Quality over quantity. Filtering out stuff like the "TheNitromeFan" data that confused ChatGPT.

2

r2k-in-the-vortex t1_j80pk9o wrote

If you are thinking of large language models like ChatGPT, then sorry, that's not going to happen in open source any time soon. Not only is training cost-prohibitive, but consumer hardware is nowhere near able to run those models. They are just plain too large.

Be happy that Stable Diffusion was released for free. Training that cost $600k, by the way.
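
Some back-of-envelope arithmetic on why consumer hardware can't run them, using GPT-3's published 175B parameter count and assuming fp16 weights and a 24 GB consumer GPU:

```python
# Rough arithmetic on why the biggest models won't fit on consumer hardware.
params = 175e9          # GPT-3-scale model (published parameter count)
bytes_per_param = 2     # fp16, weights alone (no activations, no KV cache)

weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB just to hold the weights")  # ~350 GB

# A high-end consumer GPU has ~24 GB of VRAM:
print(f"~{weights_gb / 24:.0f}x more memory than a 24 GB consumer GPU")
```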

2

Grotto-man t1_j80ro34 wrote

I'm gonna hijack this thread and ask a related question: what are currently the stumbling blocks to a helpful, intelligent, and agile robot? Boston Dynamics has the agility on lock, and these GPT-like programs have the understanding of human commands on lock. What's currently stopping us from combining them?

2

MysteryInc152 t1_j81b73v wrote

Well, robotics still has a ways to go to replicate the flexibility and maneuverability of the human body, but... nothing, really. They've already been combined, to promising results. See here - https://say-can.github.io/

1

Grotto-man t1_j81dmky wrote

Damn, it feels like the "far future" is around the corner. All those movies I've seen of robots being helpful, I actually thought that would be way, way down the line. But things seem to be progressing at a faster rate than imaginable.

Will quantum computers be helpful in this regard? If they crack that, will it speed the development of AI up or is it not that simple?

1

MysteryInc152 t1_j81hoqz wrote

Here's an improved version of what I just linked: https://inner-monologue.github.io/

Can't really speak on the quantum computers bit. Don't know how helpful they would be.

2

Thebadmamajama t1_j87056q wrote

Working in a few related fields: they are already being combined to some extent. We have machine perception, where the bot can often find objects in the world around it, and do things like pick them up and move them around. On the other end, you have all these deep learning methods that can help simplify large data sets, which makes it easier to find things more reliably. The problem is that they are all probabilistic. The machine will easily confuse objects (a dog for a loaf of bread), and then it can misjudge the world around it and unintentionally break things or hurt people.
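
A toy sketch of how that uncertainty gets handled in practice (the detections, labels, and threshold here are all made up): act only when the model's confidence clears a bar, and refuse otherwise:

```python
# A perception model emits scores, not certainties, so a robot has to
# decide when to trust them. These detections are made-up stand-ins for
# a real vision model's output.
CONFIDENCE_THRESHOLD = 0.90  # below this, don't act on the detection

detections = [
    {"label": "dog",           "score": 0.62},
    {"label": "loaf of bread", "score": 0.58},  # the classic confusion
    {"label": "coffee mug",    "score": 0.97},
]

for det in detections:
    if det["score"] >= CONFIDENCE_THRESHOLD:
        print(f"pick up the {det['label']}")
    else:
        # Refusing to act trades usefulness for safety, which is part of
        # why these bots can't yet work beside people unsupervised.
        print(f"too uncertain about '{det['label']}', do nothing")
```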

There are also practical issues: power and sensors are still in the early days, and largely inefficient or otherwise expensive. Most of the bots only have minutes of runtime before they need to charge again.

"Intelligent" and "helpful" are tall orders given all that. Combining the above is still wildly off from intelligently working side by side with a human.

I think a whole new operating system needs to be invented that sits above all this and orchestrates things: receiving commands without confusing intent, interacting with the world without serious mistakes, and working with objects it can reliably identify.

1

Waffles_And_News t1_j811la5 wrote

I'd love to see this. I like AI art but all of the generators require a subscription after a short time. It would be cool to make my own or see a free one out there.

2

Hungry-Sentence-6722 t1_j80h0jj wrote

I thought "open AI" was the company name. Because it is (OpenAI).
Maybe the hardware and storage costs are still too high? Not really sure.

1

rretaemer1 OP t1_j80ikzf wrote

OpenAI is the company that develops ChatGPT; its software isn't open source, though. Microsoft invested heavily in them in order to be able to use OpenAI's technologies.

Edit: to the best of my knowledge

0

Bewaretheicespiders t1_j80nexk wrote

>its software isn't open source, though.

They see themselves as some sort of Brotherhood of Steel, and I think it's silly.

3

theironlion245 t1_j80ick6 wrote

The majority of average people don't know what open source is and don't really care as long as their stuff works. I don't know who will win the AI war (OpenAI, Google, DeepMind, Facebook), but whichever it is, their AI will literally be implemented in everything, and most people will be fine with it.

1

rretaemer1 OP t1_j80jrlc wrote

Absolutely. It might only take one open source killer app hitting the market to change the landscape, though, and I wonder if AI will make that more of a possibility.

1

Aggressive-Guitar-83 t1_j83okzf wrote

AI is being programmed with the biases of its creator. Teslabot is the one I would trust.

1

rogert2 t1_j80gv0f wrote

This question is not merely vague, but fatally confused.

I bet folks will engage with it, but I really doubt OP will get any satisfaction.

−2

rretaemer1 OP t1_j80i7lg wrote

Fair enough. If you or someone can think of a better way to structure the question I'd love to either make another post or see it posted in a way that's worded better. I was just curious and typed my raw thoughts out.

1