ActuatorMaterial2846

ActuatorMaterial2846 t1_je8ik1t wrote

It's actually quite technical, but essentially, the transformer architecture helps each part of the sentence “talk” to all the other parts at the same time. This way, each part can understand what the whole sentence is about and what it means.
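If you want to see the core of the idea in code, here's a toy single-head self-attention sketch in NumPy (my own illustration of the mechanism in the paper, with random weights just to show the shapes):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Toy single-head self-attention: every word attends to every other word.

    X has shape (seq_len, d_model): one embedding vector per word.
    """
    Q = X @ Wq  # what each word is "asking about"
    K = X @ Wk  # what each word "offers"
    V = X @ Wv  # the content each word carries
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # relevance of every word to every other word
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # each word's new vector is a weighted mix of all the others

# 4 "words" with 8-dimensional embeddings, random weights for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8): same words, now context-aware
```

That weighted mixing, done in parallel across many heads and layers, is the "talking" part.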

Here is the paper that imo changed the world 6 years ago and is the reason for the current state of AI.

https://arxiv.org/abs/1706.03762

If it goes over your head (it did for me), ask bing or chatgpt to summarise it for you. It helped me get my head around this stuff, as I'm in no way an expert nor do I study this field.

11

ActuatorMaterial2846 t1_je8gg68 wrote

It's certainly possible. But I've read his books, and his route to immortality runs through nanobots. Although I have great respect for the man, I have no clue where we are in terms of nanotech. I haven't come across any papers or notable research on it.

So yes, I think he is usually on to something when he makes his predictions, and I'm particularly in agreement with his AGI predictions (albeit he seems a little conservative compared to others). But I'm not sure nanotech will advance quickly enough to get us to the stage he expects by 2030.

2

ActuatorMaterial2846 t1_je8e3lg wrote

So what happens is they compile a dataset. Basically a big dump of data. For large language models, that is mostly text, books, websites, social media comments. Essentially as many written words as possible.

The training is done with what's called a neural network, these days usually one built on the transformer architecture, running on huge clusters of GPUs (graphics processing units) linked together. What happens inside the neural network whilst training is a bit of a mystery; 'black box' is the term often used, as the computations are extremely complex. So not even the researchers understand exactly what happens in there.

Once the training is complete, the result is packaged up as a program, often referred to as a model. These models can then be refined and tweaked to behave a particular way for public release.
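To make that a little more concrete, here's a heavily scaled-down sketch of what a training loop looks like, assuming a PyTorch-style setup (the model, data, and numbers are toy stand-ins I made up for illustration):

```python
import torch
import torch.nn as nn

# Tiny stand-in for a real LLM: a 2-layer transformer over a 1,000-word vocabulary.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(64, 1000)  # maps the network's output back to vocabulary scores
optimizer = torch.optim.AdamW(list(encoder.parameters()) + list(head.parameters()))
loss_fn = nn.CrossEntropyLoss()

# Fake "dataset": random token embeddings plus the next-token targets to predict.
x = torch.randn(8, 16, 64)                 # 8 sequences of 16 tokens each
targets = torch.randint(0, 1000, (8, 16))  # the "correct" next words

for step in range(100):
    logits = head(encoder(x))              # predict a score for every vocabulary word
    loss = loss_fn(logits.reshape(-1, 1000), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                        # nudge all the weights to reduce the error
    optimizer.step()

torch.save(encoder.state_dict(), "model.pt")  # the saved weights are "the model"
```

The real thing is the same loop, just with billions of weights, trillions of tokens, and thousands of GPUs.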

This is a very very simple explanation and I'm sure there's an expert who can explain it better, but in a nutshell that's what happens.

25

ActuatorMaterial2846 t1_je40on3 wrote

Jokes aside, I'm not sure you are considering all the variables here. Sam Altman isn't the be-all and end-all of AI, and although he is a smart dude, he is not even the brains behind OpenAI's development.

Furthermore, OpenAI/Microsoft are not the only players. They are the biggest in the public/commercial sector, but there are many different organisations working on this technology.

Things will change, and they will change drastically. We haven't had a societal shift like this in living memory (the industrial revolution being the last major example), and this advancement will be orders of magnitude bigger.

That doesn't mean we are "fucked", but it does mean, once again, a shift in the human hierarchical structure. It's possible that money may eventually no longer be relevant. There are so many factors to consider.

Then there are the obvious positives, especially when it comes to health and medical advancements, but also unimaginable leisure and pleasurable activities at your fingertips ☺️.

And the obvious negatives too: hair-trigger defence systems and autonomous robotic weapons, mass propaganda and misinformation, scamming, etc.

The world is changing, capitalism is not robust enough to withstand it, and there will be a new order that we will all have to adapt to. Scary, but nothing suggests it will be bad or good overall, just certainly intense and different.

Civilisation is fluid and forever changing; we just typically don't live long enough to see it happen. That is changing as technology speeds towards the singularity and we live longer, and soon possibly indefinitely.

4

ActuatorMaterial2846 t1_je1r5pm wrote

r/machinelearning seems to be a good sub. People seem grounded, yet have a very good understanding of the field and are up to date.

It's a little less fanciful than this sub, but that's why I like this sub.

2

ActuatorMaterial2846 t1_jd5oap8 wrote

I think fast, adaptive, unaligned. The choice by OpenAI to go for-profit shows a level of hubris amongst the creators in the sector.

It just seems so arrogant to close their research off and then spout some pseudo-intellectual drivel about alignment and the human condition in order to justify it, as if only they can solve the mystery.

If it is to be human-aligned, it needs to be open, where academics, intellectuals, and the general public can see the direction it's heading in, not a small group of technocrats who think they know best for society.

5

ActuatorMaterial2846 t1_jaei6xk wrote

Transhumanism will be widely adopted. In fact, it kind of already is. The preventative vaccines many of us are required to take in the early stages of our lives are a good example of how it will happen.

If you take Kurzweil's predictions regarding nanobots, the concept doesn't seem nearly as invasive as, say, cutting your skull open to put a Neuralink-style implant in your brain.

8

ActuatorMaterial2846 t1_jab26pm wrote

We are quite close imo. Today's chatbots are pretty dumb, and most people will base their opinions on the likes of ChatGPT or Bing.

Some people also seem to think it's just slightly improved technology from decades ago and are not familiar with the advancements in transformer architectures and neural networks. Models now largely learn on their own, with human input mostly limited to the initial parameters. Protein folding is a massive leap also. We are on the cusp of a new technological age now.

5

ActuatorMaterial2846 t1_jaax7vx wrote

The 'grabby aliens' hypothesis is quite compelling, and many astrophysicists and biologists seem to consider it more plausible than the great filter.

Basically, a series of hard steps need to be accomplished, and the galaxy, at least, is too young and too hostile for it to be swarming with intelligent advanced civilisations.

The Fermi paradox has been around a while, and there are a few other theories, the 'dark forest' being one example. So I wouldn't simply succumb to such a basic and old concept when so many great minds have come up with plausible counters to the great filter.

E: I guess this isn't a thread for discussion then...

E2: I just realised I'm in r/futurology, makes sense now...

0

ActuatorMaterial2846 t1_ja7jpy2 wrote

Realistically, they should be forced to open their data to public scrutiny. This secrecy to one-up one another in the name of profit is downright fucking dangerous. I'm certain these companies have some ethical questions to answer.

E: Holy crap, lol the downvotes. How has this made people so butthurt?

2

ActuatorMaterial2846 t1_ja4xeos wrote

I'm an electrician, I have worked on all kinds of housing and buildings, old, new, prefab etc.

What I think he is saying is that it is similar to prefab buildings. With such designs, wires, plumbing, and general utilities are all pre-installed and simply need to be hooked up to the mains once the structure has been put together. In Altman's example, I believe he is looking at 3D printing specifically. Similar ideas have been proposed for lunar and Mars bases.

Essentially, it means utilising the raw materials within the proposed structure's environment. Piping would actually be pretty simple; it can be ceramic, for example. However, when it comes to electricity, you need conductive material, which is not found just anywhere.

2

ActuatorMaterial2846 t1_j9yauge wrote

Yeah, I think people took that comment about 'instantly killing us by releasing a poison in the atmosphere' a bit too seriously. Maybe because it was so specific, idk.

But he does have a point that we should be concerned about an autonomous entity smarter than humans in all cognitive ability. An entity that has no known desire apart from a core function to improve and adapt to its environment.

Such an entity would most certainly begin competing with us for resources. So, his emphasis on alignment is correct, and he is probably not overstating the difficulty in achieving that.

Everything else he says is a bit too doomer with little to back it up.

5

ActuatorMaterial2846 t1_j9w2p6b wrote

>Beware the snake oil. They have impressive ML (“Machine Learning”) models built/trained from content, algorithms, and neural networks. That is not “AI” and it is not “AGI”. Beware the snake oil. Remember what it actually is. Don’t fall for the hucksters and word games. twitter.com/cccalum/status…

These comments annoy me. Of course it's AI by every definition of the term.

When you see someone say this, they are simply a denialist refusing to look at objective reality. You could beat someone like this over the head with objective truth and they would deny it with each blow. I will never understand such closed-minded, dogmatic attitudes.

97

ActuatorMaterial2846 t1_j9r171j wrote

I'm pretty stupid, but I just want to grasp something if it can be clarified.

A basic function can be described as an equation with a definite answer: 1 + 1 = 2.

But what these neural networks seem to do is take a basic function and provide an approximation. That approximation seems to be based on context, perhaps on the equations preceding or succeeding it.

I've heard it described as complex matrices with inscrutable floating-point numbers.
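For what it's worth, here's the toy picture in my head, as a sketch (entirely my own illustration, with random untrained weights):

```python
import numpy as np

# A network never stores "1 + 1 = 2" as a rule. It stores matrices of
# floating-point weights that, multiplied together, *approximate* the answer.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 16))   # the "inscrutable" numbers live in these matrices
W2 = rng.normal(size=(16, 1))

def net(a, b):
    hidden = np.maximum(0, np.array([a, b]) @ W1)  # matrix multiply plus a nonlinearity
    return (hidden @ W2).item()

# Untrained, this answers nonsense. After training on examples it would output
# roughly 2.0 for (1, 1): close, but never exact, and shaped entirely by the
# data (the context) it was trained on.
print(net(1, 1))
```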

Have I grasped this or am I way off?

2

ActuatorMaterial2846 t1_j9qma3y wrote

I'm more convinced that we may never create an AI with sentience. An AI will likely always mimic it though.

However, I do think an AGI and ASI are inevitable. Sentience isn't required for such things to exist.

Such intelligence just has to be similar to the AlphaGo or AlphaFold models, except capable of doing all human cognitive tasks at that level or higher, and it needs to be able to operate autonomously.

There are organisms in the world that behave like this, albeit not intelligent as we understand it, or even alive, but still incredibly complex, autonomous, and adaptable.

1

ActuatorMaterial2846 t1_j9dyomx wrote

Roblox and Minecraft are apparently incorporating text prompts to build game worlds. It'll start quietly I think, with games like those, but also with modding.

The issue is that generating this stuff requires a lot of compute, on top of the usual graphics rendering. Perhaps hardware will gain dedicated components for it in the very near future, but until then, it will have to be generated online.

1

ActuatorMaterial2846 t1_j93i5ce wrote

I actually kind of agree. The transformer architecture isn't the complicated part; it's the trained neural networks held by large companies and governments that are very expensive. It's easy to see such tech remaining in the hands of the powerful, but I'm not convinced that's going to be the case in the near future.

There are already proven examples of this technology being completely open source. Stability AI is already leaps and bounds ahead of DALL-E 2, for example.

When GPT and the chatbots get nerfed, it will drive more people to seek out open-source options. DALL-E 2 is a locked-down system and will likely be a paid platform, yet Stable Diffusion is open source and runs on a user's own backend. I'm not sure the big corps will be able to keep up.

However, my concern is the sophistication of the neural networks that are no doubt classified, most definitely in the hands of government and military.

11