Agreeable_Bid7037 t1_j14ubxq wrote

True, and people are already working on ways to create better AI using existing AI, so AGI may arrive quite abruptly, and soon.

13

TouchCommercial5022 t1_j15jo1r wrote

⚫ AGI is entirely possible. The only thing that could rule it out is if some mysterious, unexplained process in the brain responsible for our general intelligence cannot be replicated digitally. But that doesn't seem to be the case.

Other than that, I don't think anything short of an absolute disaster can stop it.

Since general natural intelligence exists, the only way to make AGI impossible is by a limitation that prevents us from inventing it. Its existence wouldn't break any laws of physics, it's not a perpetual motion machine, and it might not even be that impractical to build or operate if you had the blueprints. But the problem would be that no one would have the plans and there would be no way to obtain them.

I imagine this limitation would be something like a mathematical proof that using one intelligence to design another intelligence of equal complexity is an undecidable problem. On the other hand, evolution did not need any intelligence to reach us...

Let's say a meteor was going to hit the world and end everything.

That's when I'd say AGI isn't likely.

Assume that all intelligence occurs in the brain.

The brain has on the order of 10^26 molecules. It has about 100 billion neurons. With magnetic resonance imaging (perhaps an improvement on the current state of the art) we could get a snapshot of an entire working human brain. At most, an AI that is a general simulation of a brain only has to model this. (It's "at most" because the human brain has things we don't care about, for example, "I like the taste of chocolate.") So we don't have to understand anything about intelligence; we just have to reverse engineer what we already have.
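To make that "at most" concrete, here's a back-of-envelope sketch of the compute a naive neuron-level emulation might need. Every figure is an assumed order of magnitude (synapses per neuron, firing rate, cost per synaptic event), not a measurement:

```python
# Back-of-envelope sketch: compute for a naive neuron-level brain
# emulation. All figures are assumed orders of magnitude.

NEURONS = 1e11              # ~100 billion neurons, per the estimate above
SYNAPSES_PER_NEURON = 1e4   # commonly cited rough figure (assumption)
FIRING_RATE_HZ = 100        # assumed average spike rate, on the generous side
OPS_PER_SYNAPSE_EVENT = 10  # assumed cost to update one synapse per spike

synapses = NEURONS * SYNAPSES_PER_NEURON
ops_per_second = synapses * FIRING_RATE_HZ * OPS_PER_SYNAPSE_EVENT

print(f"Synapses: {synapses:.0e}")                   # ~1e15
print(f"Naive estimate: {ops_per_second:.0e} ops/s") # ~1e18, i.e. exascale
```

Under those assumptions it lands around 10^18 ops/s, roughly where the largest supercomputers sit today; demand molecule-level fidelity instead and it blows up by many orders of magnitude, which is why the "what level do we need to model?" question matters so much.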

There are two additional things to consider:

⚫ If you believe that evolution created the human mind and its property of consciousness, then machine-modeled evolution could theoretically do the same without a human needing to understand all the ins and outs. If consciousness came into existence once without any conscious being trying to create it, then it can do so again.

⚫ AlphaGo, the Google AI that beat one of Go's top champions, was so important precisely because it showed that we can produce an AI that can find answers to things we don't quite understand. In chess, when Deep Blue was made, the IBM programmers explicitly programmed a 'value function', a way to look at the board and judge how good the position was for the player, e.g. "a queen is worth ten points, a rook five points, etc.; add it all up to get the current value of the board."
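That kind of hand-coded value function is simple enough to sketch in a few lines. This is an illustrative toy (the piece values and board representation are made up for the example, not Deep Blue's actual tuned weights):

```python
# A minimal hand-coded "value function" of the kind described above:
# sum fixed piece values to score a position. Illustrative toy only.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 10, "K": 0}

def evaluate(pieces):
    """pieces: piece letters on the board; uppercase = ours, lowercase = opponent's."""
    score = 0
    for piece in pieces:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# We have a queen and two pawns; the opponent has a rook and a pawn.
print(evaluate(["Q", "P", "P", "r", "p"]))  # 10 + 1 + 1 - 5 - 1 = 6
```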

With Go, by contrast, the value of the board isn't something humans have figured out how to explicitly compute in a useful way; a stone in a particular position could be incredibly useful or harmful depending on the moves that could happen 20 turns down the road.

However, by giving AlphaGo many games to look at, its learning algorithm eventually figured out how to judge the value of a board. This 'intuition' is the key: it shows that AI can learn tasks for which humans cannot explicitly write the rules, which in turn shows that we can write AI that comes to understand more than we do, suggesting that, in the worst case, we could write 'bootstrapping' AIs that learn to create a real AI for us.
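The contrast with the hand-coded evaluator above can be shown in miniature: instead of writing the scoring rules down, fit them from outcomes. This toy uses logistic regression on made-up "board features" labeled by who went on to win; it stands in for the idea only, not AlphaGo's actual deep network or training setup:

```python
# Toy "learned value function": recover a hidden scoring rule from
# game outcomes instead of hand-coding it. Data is synthetic.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))           # 8 made-up board features per position
hidden_rule = rng.normal(size=8)         # the rule no human wrote down
y = (X @ hidden_rule > 0).astype(float)  # 1 = this side went on to win

w = np.zeros(8)
for _ in range(500):                # plain gradient descent on logistic loss
    p = 1 / (1 + np.exp(-(X @ w)))  # predicted win probability
    w -= 0.1 * X.T @ (p - y) / len(y)

accuracy = (((X @ w) > 0) == (y == 1)).mean()
print(f"Learned value function agrees with outcomes {accuracy:.0%} of the time")
```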

Many underestimate the implications of "solving intelligence". Once we know what intelligence is and how to build and amplify it, everything will be connected to a higher-than-human intelligence that works at least thousands of times faster... and we don't even know what kind of emergent abilities lie beyond human intelligence. It's not just about speed: speed and accuracy are the parts we can predict, but there could be more.

The human brain exists. It's a meat computer. It's smart. It's sentient. I see no reason why we can't duplicate that meat computer with electronic circuitry. The Singularity is not a question of if, but when.

We need a Manhattan Project for AI

AGI superintelligence will advance so rapidly once the tipping point is passed (think minutes or hours, not months or years) that even the world's biggest tech nerd wouldn't see it coming, even if it happened out in the open.

When will it happen?

Hard to tell, because technology generally advances as a series of S-curves rather than a single smooth exponential. Are we currently in an S-curve that leads rapidly to full AGI, or are we in a curve that flattens out and stays fairly flat for 5-10 years until the next big breakthrough? Also, the last 10% of progress might actually require 90% of the work. It may seem like we're very close, but resolving the last remaining issues could take years. Or it could happen this year or next. I don't know enough to say (and probably no one does).
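Part of why it's so hard to tell: an exponential and an S-curve look nearly identical while you're on the steep part. A quick sketch with made-up growth numbers:

```python
# Exponential vs. logistic (S-curve) growth: indistinguishable early,
# wildly different later. Growth rate and ceiling are illustrative.

import math

def exponential(t, r=0.5):
    return math.exp(r * t)

def s_curve(t, r=0.5, ceiling=100.0):
    return ceiling / (1 + (ceiling - 1) * math.exp(-r * t))

for t in range(0, 16, 3):
    print(f"t={t:2d}  exponential={exponential(t):8.1f}  s-curve={s_curve(t):6.1f}")
# Both start out the same; only later does the S-curve flatten toward
# its ceiling. From inside the curve, you can't tell which one you're on.
```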

It's like quantum physics. In the end, 99.99% of us have no fucking idea. It could take 8 years, 80 years or never.

Personally, I'm more on the side of AGI gradually coming into our lives rather than being switched on one day.

I imagine narrow AI systems will continue to seep into everything we use, as they already are (apps, games, creating music playlists, writing articles), but that they will gradually gain more capabilities as they develop. Take the most recent crowning achievement: GPT-3. I don't see it as an AGI in any sense, but I don't see it as totally narrow either. It can do multiple things instead of one: it can be a chatbot, an article writer, a code assistant, and much more. But it is also limited, and quite amnesiac when it comes to chatting, as it can only remember so much of its own conversation, breaking the illusion of speaking to something intelligent.

But I think these problems will go away over time as we discover new solutions and new problems.

So, TL;DR: I feel like narrow AI will gradually turn into general AI over time.

To take it to the extreme, for fun: we could end up with a chatbot assistant that we can ask almost anything to help us in our daily lives. If you're in bed and can't sleep, you could talk to it; if you're at work and having trouble with a task, you could ask it for help; etc. It would be like a virtual assistant, I guess. But that's me fantasizing about what could be, not a prediction of what will be.

2029 seems pretty viable in my opinion. But I'm not too convinced that it will have worked its way into society and into over 70% of the population's personal lives by then. There is also the risk of a huge public backlash against AI if some things go wrong and give it a bad image.

But yes, 2029 seems feasible. 2037 is my most conservative estimate.

Ray Kurzweil was the one who originally specified 2029. He chose that year because, extrapolating forward, it seemed to be the year the world's most powerful supercomputer would achieve the same capacity, in terms of "instructions per second", as a human brain.

The details of the computing estimates have changed a bit since then, but his predicted date remains the same.
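The style of extrapolation is easy to reproduce. The specific numbers below are illustrative assumptions (a real 2005 supercomputer figure, an upper-range brain estimate, an assumed doubling time), not Kurzweil's actual inputs:

```python
# Kurzweil-style extrapolation: when does a steadily doubling
# supercomputer cross a brain-scale estimate? Figures are assumptions.

import math

BRAIN_OPS = 1e18       # upper-range estimate of brain ops/s (assumption)
BASE_YEAR = 2005
BASE_OPS = 2.8e14      # ~Blue Gene/L's benchmark speed in 2005
DOUBLING_YEARS = 2.0   # assumed doubling time for top machines

years_needed = DOUBLING_YEARS * math.log2(BRAIN_OPS / BASE_OPS)
print(f"Crossing year: ~{BASE_YEAR + years_needed:.0f}")  # ~2029
```

Note how sensitive this is: pick a brain estimate of 10^16 ops/s instead and the crossing already happened years ago, which is exactly why "the details have changed but the date hasn't" is worth flagging.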

It could be even earlier.

That is, if the scaling hypothesis is true. We are likely to see AI models with 1 to 10 trillion parameters in 2021.

We will see 100 trillion by 2025, according to OpenAI.

The human brain is at about 1,000 trillion (synapses). Also, each model is trained on a newer, better architecture.
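Taking those figures at face value (they're the comment's claims, not confirmed roadmaps), the implied growth rate and crossing point work out like this:

```python
# Extrapolating the parameter counts quoted above, taken at face value:
# when do model parameters reach ~1,000 trillion brain synapses?

import math

params_2021 = 10e12       # upper end of the "1 to 10 trillion in 2021" claim
params_2025 = 100e12      # "100 trillion by 2025"
brain_synapses = 1000e12  # ~1,000 trillion synapses

growth_per_year = (params_2025 / params_2021) ** (1 / 4)  # 2021 -> 2025
years_after_2025 = math.log(brain_synapses / params_2025, growth_per_year)
print(f"~{growth_per_year:.1f}x per year; brain scale around {2025 + years_after_2025:.0f}")
```

Under those numbers it lands right around 2029, though parameter count and synapse count are very different things, so treat this as numerology as much as forecasting.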

I'm sure something has changed in the last 2-3 years. I think maybe it was the transformer.

In 2018, Hinton was saying that general intelligence wasn't even close and we should scrap everything and start over.

In 2020, Hinton said that deep networks could actually do everything.

According to Kurzweil, this has been going on for a while:

In the '90s, people were saying AGI was thousands of years away.

Then, in the 2000s, they said it was only centuries away.

By the 2010s, with deep learning, people were saying it's only a few decades away.

AI progress is one of our fastest exponentials. I'll take the 10-year bet for sure.

6

visarga t1_j15tcrf wrote

> like a mathematical proof that using one intelligence to design another intelligence of equal complexity is an undecidable problem

No, it's not like that. Evolution is not a smart algorithm, but it created us and all life. Even though it is not smart, it is a "search and learn" algorithm. It does massive search, and the result of massive search is us.

AlphaGo wasn't initially smart. It was just a dumb neural net running on a dumb GPU. But after playing millions of games in self-play, it was better than humans. The way it plays is by combining search + learning.

So a simpler algorithm can create a more advanced one, given a massive budget of search and ability to filter and retain the good parts. Brute forcing followed by learning is incredibly powerful. I think this is exactly how we'll get from chatGPT to AGI.
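A minimal sketch of that "massive search, then keep the good parts" idea, with no intelligence built in (an illustrative toy, not AlphaGo's actual algorithm):

```python
# Blind search + retention: a hill climber with no knowledge of *why*
# anything works, only whether a random change scored better. Toy only.

import random

random.seed(42)
GENOME_LEN = 64
TARGET = [1] * GENOME_LEN                      # the "fit" genome search must find
genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(g):
    return sum(a == b for a, b in zip(g, TARGET))

for step in range(10_000):                     # massive dumb search...
    mutant = genome[:]
    mutant[random.randrange(GENOME_LEN)] ^= 1  # flip one random bit
    if fitness(mutant) >= fitness(genome):
        genome = mutant                        # ...filter and retain what works
    if fitness(genome) == GENOME_LEN:
        print(f"Solved by blind search in {step} steps")
        break
```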

3

Vitruvius8 t1_j177x96 wrote

The way we look at and interpret consciousness could all be cargo-cult thinking. We might not be on the right track at all, just making it look like we are.

1

matt_flux t1_j17zv3s wrote

We aren’t just meat computers, we are alive, conscious, and have a drive to create. We are made in the image of God, and AI will always lack that.

−1