Submitted by atomsinmove t3_10jhn38 in singularity

I'm someone who agrees with the Metaculus/Kurzweil timeline for AGI, or expects it even sooner, but this is such an important issue that I think it's worth looking at the other side's perspective.

So I encourage you to post your arguments for why you think AGI/ASI is a decade or more away. Do mention your expected timeline along with the counterarguments to it. I'll start -

The best reason why my expectations might be too optimistic is that recent advancements are showing that certain things humans do aren't as hard as we assumed. One example would be creativity, which can be pretty vague, especially in areas like art.

So maybe the really hard problems, like motor perception and control, physics research, etc., are the main hurdles.

Note that I'm not sure about this, I'm just trying to think of possible arguments for the "other side". Continue.

15

Comments

AsheyDS t1_j5kiejw wrote

2035+

The one AGI project I'm close to has a design, potential solutions for all the big problems, and a loose plan for implementation. So I'm going largely off of that, but funding, building, training, and testing take time. Rushing it wouldn't help anything anyway.

The few others that I've seen that have potential (in my opinion of course) will probably get it eventually, but are missing some things. Whether those things become showstoppers or not has yet to be seen. And no, they have nothing to do with LLMs.

I also think that society needs to prepare. I'm actually becoming more comfortable with people calling non-AGI AGI because it will help people get used to it, and encourage discussion, get new laws on the books, etc. I don't think there's much use trying to pin an exact date on it, because even after the first real AGI is available, it will just be the first of many.

10

Baturinsky t1_j5kpoth wrote

How do you plan to make it not kill everyone, whether through a mistake in alignment or someone intentionally running a misaligned AGI? I don't see how it can be done without extreme safety measures, such as many AIs and people keeping an eye on every AI and every human at all times.

3

AsheyDS t1_j5l6v7c wrote

Their approach to safety, to put it simply, would be to keep it in an invisible box, watched by an invisible guard that intervenes covertly when needed to keep it within that box should it stray towards the outside.

You are right that AIs and people are going to have to watch out for other people and their AIs. But even if you remove the AI component, you can still say the same. Some people will try to scam you, take advantage of you, use you, or worse. AI makes that quicker and easier, so we'll have to be on the lookout, we'll have to discuss these things, and we'll have to prepare and create laws anticipating them. But if everyone can gain access to it equally, either as SaaS or open source and locally run, then there will be tools to protect against malicious uses. That's all that can be done, really, and no one company will be able to solve that.

1

Baturinsky t1_j5ma4a3 wrote

If we don't have a robust safety system that works across companies and across states by that time, I don't see how we will survive it.

2

AsheyDS t1_j5myhgr wrote

We don't have a lot of time, but we do have time. I don't think there will be any immediate critical risks, especially with safety in mind, and what risk there is might even be mitigated by near-future AI. ChatGPT, for example, may soon be adequate at fact-checking misinformation. Other AIs might be able to spot deepfakes. It would help if more people started discussing the ways AGI can potentially be misused, so everybody can begin preparing and building up protections.

2

Baturinsky t1_j5n2dnx wrote

Do you really expect ChatGPT to go against the US disinformation machine? Do you think it will be able to give a balanced report on controversial issues, taking into account the credibility and affiliation of sources and the quality of reasoning (such as NOT accepting "proofs" based on "alleged" and "highly likely")? Do you think it will honestly present the points of view of countries and sources not affiliated with or bought by the USA and/or the Democratic or Republican party? Do you think it will let users define the criteria for credibility themselves and give info based on those criteria, rather than pushing the "only truth"?

Because if it won't, and AI is used by the powerful to brainwash the masses instead of as a tool for the masses to resist brainwashing, then we'll have a very gullible population and a very dishonest AI by the time it matters most.

P.S. And yes, if/when China or Russia make something like ChatGPT, it will probably push their governments' agendas just like ChatGPT pushes the US agenda. But is there any hope for an impartial AI?

1

AsheyDS t1_j5n68fi wrote

I mean, that's out of their hands and mine. I probably shouldn't have used ChatGPT as an example, I just mean near-future narrow AI. It's possible we'll have unbiased (or at least minimally biased) AI over the next few years, but nobody can tell how many there will be or how effective they'll be.

2

Baturinsky t1_j5nwu4s wrote

I believe a capability like that could be key to our survival. It is required for our alignment as humanity, i.e. being able to act together in the interest of humanity as a whole. The direst political lies are usually aimed at splitting people apart and making them fear each other, since people are easier to control and manipulate in that state.
Also, this ability could be necessary for strong AI to even be possible, as strong AI should be able to reason successfully from partially unreliable information.
And lastly, this ability will be necessary for AIs to check other AIs' reasoning.

1

iiioiia t1_j5m1mue wrote

> Their approach to safety, to put it simply, would be to keep it in an invisible box, watched by an invisible guard that intervenes covertly when needed to keep it within that box should it stray towards the outside.

Can't ideas still leak out and get into human minds?

1

AsheyDS t1_j5mtpp0 wrote

Can you give an example?

1

iiioiia t1_j5mzfu0 wrote

Most of our rules and conventions are extremely arbitrary, highly suboptimal, and maintained via cultural conditioning.

1

AsheyDS t1_j5n7s65 wrote

The guard would be a compartmentalized hybridization of the overall AGI system, so it too would have a generalized understanding of what undesirable things are, even according to our arbitrary framework of cultural conditioning. So could undesirable ideas leak out? Well, no, not really. Not if the guard and other safety components are working as intended, AND if the guard is programmed with enough explicit rules and conditions and enough examples to extrapolate from effectively (meaning not every case needs to be accounted for if patterns can be derived).

2

iiioiia t1_j5nafg6 wrote

How do you handle risk that emerges years after something becomes well known and popular? Let's say it produces an idea that starts out safe but then mutates? Or, a person merges two objectively safe (on their own) AGI-produced ideas, producing a dangerous one (that could not have been achieved without AI/AGI)?

I dunno, I have the feeling there's a lot of unknown unknowns and likely some (yet to be discovered) incorrect "knowns" floating out there.

1

AsheyDS t1_j5njw0c wrote

>a person merges two objectively safe (on their own) AGI-produced ideas

Well, that's kind of the real problem, isn't it? A person, or people, and their misuse or misinterpretation or whatever mistake they're making. You're talking about societal problems that no one company is going to be able to solve. They can only anticipate what they can, hope the AGI anticipates the rest, and tackle future problems as they come.

1

iiioiia t1_j5o1g81 wrote

This is true even without AI, and it seems we weren't ready (climate change) even for the technology we've developed so far.

1

No_Ask_994 t1_j5o8d4v wrote

Is the invisible guard another AGI? Does it have its own guard?

1

AsheyDS t1_j5q48vw wrote

A hybridized partition of the overall system. It uses the same cognitive functions, but has separate memory, objectives, recognition, etc. They hope for the whole thing to be as modular and intercompatible as possible, largely through their generalization schema. So one segment of it will have personality parameters, goals, memory, and whatever else, and the rest will be roughly equivalent to subconscious processes in the human brain, which will be shared with the partition. As I understand it, the guard would be strict and static, unless its objectives or parameters are updated by the user via natural language programming. So its actions should be predictable, but if it somehow deviates then the rest of the system should be able to recognize that as an unexpected thought (or action or whatever), either consciously or subconsciously, which would feed back to the guard and reinitialize it, like a self-correcting measure. And once it has been corrected, it can edit the memory of the main partition so that it's unaware of the fault. None of this has been tested yet, and they're still revising some things, so this may change in the future.
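
Purely as an illustration of that feedback loop (this is not their actual design; every name and rule below is hypothetical), a toy sketch might look like this:

```python
# Toy sketch of the guard/main-partition idea described above. Everything here
# is hypothetical: a guard with its own objectives and memory checks each
# "thought" against explicit, static rules, intervenes when one is flagged,
# and edits the main partition's memory so the fault isn't retained.

from dataclasses import dataclass, field

@dataclass
class Partition:
    objectives: list
    memory: list = field(default_factory=list)

class GuardedSystem:
    def __init__(self):
        self.main = Partition(objectives=["assist the user"])
        self.guard = Partition(objectives=["enforce safety constraints"])
        self.banned_patterns = ("deceive", "cause harm")  # explicit, static rules

    def guard_allows(self, thought: str) -> bool:
        return not any(p in thought for p in self.banned_patterns)

    def step(self, thought: str) -> str:
        if not self.guard_allows(thought):
            self.guard.memory.append(f"intervened: {thought}")
            # Scrub the main partition's memory of the offending thought.
            self.main.memory = [m for m in self.main.memory if m != thought]
            return "<intervention>"
        self.main.memory.append(thought)
        return thought
```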

1

Cr4zko t1_j5key8v wrote

How do you go from the LLMs of today to full blown AGI in 6 years?

6

red75prime t1_j5khfna wrote

You find a way to make it recurrent (keep state alongside the input buffer), add memory (working memory, as part of that state, plus long-term), overcome catastrophic forgetting in online learning, and find efficient intrinsic motivations. Maybe that's enough.
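
A minimal sketch of what that could look like around a frozen model (the `frozen_lm` callable, the summary-based state, and the naive memory store are all assumptions, not anyone's actual system):

```python
# Sketch: wrap a frozen text-in/text-out model with recurrent state and an
# external memory. `frozen_lm` is a stand-in for any language model call.

from typing import Callable, List

def make_agent(frozen_lm: Callable[[str], str]):
    state = ""                  # recurrent working state carried across steps
    long_term: List[str] = []   # naive long-term memory store

    def step(observation: str) -> str:
        nonlocal state
        recalled = " | ".join(long_term[-3:])   # crude retrieval: last few entries
        prompt = f"State: {state}\nMemory: {recalled}\nInput: {observation}\nAction:"
        action = frozen_lm(prompt)
        # Update the carried state and store the step in long-term memory;
        # the frozen weights are never touched, so forgetting becomes a
        # property of the memory policy rather than the network.
        state = frozen_lm(f"Summarize for next step: {state}; {observation}; {action}")
        long_term.append(f"{observation} -> {action}")
        return action

    return step

# Example with a dummy model standing in for the frozen LLM:
agent = make_agent(lambda prompt: prompt[-40:])
print(agent("the door is locked"))
```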

6

MrEloi t1_j5ku3gu wrote

That is clearly the next step.

I'm not sure how easy that will be, though.

It could be that a large 'frozen' model in combination with some clever run-time code and a modicum of short/medium term memory would suffice.

After all, the human brain seems (to me) to be a huge static memory plus relatively little run-time stuff.

3

No_Ninja3309_NoNoYes t1_j5knhnq wrote

The focus is currently on Deep Learning. So why wouldn't DL bring AGI in its current form? First, in simple terms, how does it work? The most common setup uses inputs and weights. The sum of the products is propagated forward. There are ReLUs, batch normalisation, residual connections and all kinds of tricks in between. The outputs are checked against expected values. Weights are then updated to fit the expected outputs for the given inputs.
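
To make that concrete, here's a minimal toy version of the setup just described: one dense layer, a ReLU, and weight updates toward some arbitrary expected outputs.

```python
import numpy as np

# Toy version of the setup above: inputs times weights, summed, passed through
# a ReLU, outputs compared to expected values, weights updated to close the gap.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                   # 4 inputs with 3 features each
w = rng.normal(size=(3, 2))                   # weights mapping 3 features to 2 outputs
y_expected = np.abs(rng.normal(size=(4, 2)))  # arbitrary expected outputs

for _ in range(200):
    y = np.maximum(0, x @ w)           # forward pass: sum of products + ReLU
    error = y - y_expected             # check outputs against expected values
    grad = x.T @ (error * (y > 0))     # gradient flows only where the ReLU is active
    w -= 0.05 * grad                   # update weights to fit the expected outputs

print(np.abs(np.maximum(0, x @ w) - y_expected).mean())  # error shrinks over training
```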

There are multiple neural layers. That is why we speak of Deep Learning. So to use a crude analogy, imagine that you are the leader of a squad, and your soldiers understand 80% of your orders. Now imagine being the platoon leader, and your squad leaders likewise understand 80% of your orders. How many of your orders reach your soldiers? Only about 64%. Now imagine having a hundred or more layers. Adding layers isn't free. And with almost all AI companies doing the same thing, we will run out of GPUs soon.

Also, real neurons are more complicated than those in DL models. There are things like spiking, brain plasticity, neurotransmitters, and synaptic plasticity that DL doesn't take into account. So the obvious solution is neuromorphic hardware and appropriate algorithms. It's anyone's guess when those will be ready.

5

red75prime t1_j5kx5ha wrote

Backpropagation is the tool that takes care of the servicemen not getting the orders. There's the vanishing gradient problem affecting deep networks, but ReLUs and residual connections seem to take care of it just fine. Mitigating the problem in recurrent networks is harder, though.

As for the brain... The brain architecture most likely is not the one and only architecture suitable for general intelligence. And taking into account that researchers get similar results when scaling up different architectures, there are quite a few of them.
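
For anyone unfamiliar with the trick: a residual connection just adds a layer's input back to its output, which keeps an identity path for the signal (and the gradient) no matter how deep the stack gets. A rough sketch:

```python
import numpy as np

def residual_block(x, w1, w2):
    # The block's output is its transformation PLUS the unchanged input, so
    # there is always a direct identity path from output back to input.
    return np.maximum(0, x @ w1) @ w2 + x

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
for _ in range(100):                      # stack 100 blocks
    w1 = rng.normal(size=(8, 8)) * 0.05
    w2 = rng.normal(size=(8, 8)) * 0.05
    x = residual_block(x, w1, w2)
print(x)  # the signal still flows after 100 blocks, thanks to the skip path
```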

6

tatleoat t1_j5kh284 wrote

Transitioning from an AI that only responds to prompt stimulus to an AI that can take initiative of its own accord. That might still turn out to be surprisingly hard.

4

QLaHPD t1_j5l9v0g wrote

An AI that only responds to prompts and an AI that has a "will of its own" are the same thing. If you train a model to mimic the behavior of a dog, to an outside observer it will look like it has developed some kind of initiative. Human behavior can be expressed as a sequence of tokens, thus you can train a model to predict the next action given context.

3

tatleoat t1_j5lal05 wrote

Yeah, I totally agree. I don't really believe we won't have AGI until after 2029, but it was the only counterexample I could think of.

3

phriot t1_j5kyd45 wrote

Kurzweil's prediction is based on two parameters:

  1. The availability of computing power sufficient to simulate a human brain.
  2. Neuroscience being advanced enough to tell us how to simulate a human brain at a scale sufficient to produce intelligence.

I don't think that Kurzweil does a bad job at ballparking the calculations per second of the brain. His estimate is under today's top supercomputers, but still far greater than a typical desktop workstation. (If I'm doing my math right, it would take something like 2,000 Nvidia GeForce 4090s to reach Kurzweil's estimate at double precision, which is the precision supercomputers are measured at, or ~28 at half or full precision.)
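
(If you want to redo that back-of-the-envelope math, here's the general shape of it. The figures below are rough assumptions, not gospel: Kurzweil's published brain estimates range from roughly 10^14 to 10^16 calculations per second, and the GPU throughput numbers are approximate.)

```python
# Back-of-the-envelope GPU count. All figures are assumptions for illustration:
# a brain estimate of ~2.6e15 calculations/second (Kurzweil's published figures
# range from about 1e14 to 1e16), and an RTX 4090 at ~1.3 TFLOPS double
# precision and ~83 TFLOPS single precision.
brain_cps = 2.6e15
gpu_fp64 = 1.3e12
gpu_fp32 = 83e12

print(f"GPUs at double precision: {brain_cps / gpu_fp64:,.0f}")  # ~2,000
print(f"GPUs at single precision: {brain_cps / gpu_fp32:,.0f}")  # ~31
```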

That leaves us with the neuroscience. I'm not a neuroscientist, but I am another kind of life scientist. Computing power has followed this accelerating trend, but basic science is a lot slower. It is more of a punctuated equilibrium model than an exponential one. Things move really fast when you know what to do next, and then they hit a roadblock while you make sense of all the new information you've gathered. It also relies on funding, and people. Scientists at the Human Brain Project consider real-time models a long-term goal. Static, high-resolution models that incorporate structure and other data (genomics, proteomics, etc.) are listed as a medium-term goal. I don't know what "long term" means to this group, but I'm assuming it's more than 6 years. And if all that complexity is required, then Kurzweil is likely off by several orders of magnitude, which could put us decades out from his prediction. Then again, maybe you don't need to model everything in that much detail to get to intelligence, but that goes against Kurzweil's prediction.

Of course, this all presupposes that you need a human brain for human-level intelligence. It's not a bad guess, as all things that we know to be intelligent have nervous systems evolved on Earth and share some last common ancestor. If we go another route to intelligence, that puts us back at factoring people into the process. We either need people to design this alternate intelligence architecture, or create weak AI that's capable of designing this other architecture.

I could be wrong; maybe you can slap some additional capability modules onto an LLM, let it run and retrain itself constantly on a circa-2029 supercomputer, and that will be sufficient. But I A) don't know for sure that will be the case, and B) think that if it does happen, it's kind of just a coincidence and not to the letter of Kurzweil's prediction.

4

Ortus14 t1_j5objyy wrote

I see no reason why understanding the human brain would be needed.

We have more than enough concepts and AGI models; we just need more compute, imho. Compute (for the same cost) increases by roughly a thousand times every ten years. So by Kurzweil's 2045 date, compute for the same cost can be estimated to be 4.2 million times more than today.
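
As a quick sanity check on that figure (assuming the 1,000x-per-decade rate holds from 2023 straight through to 2045):

```python
# 1,000x price-performance per decade, compounded over 22 years (2023 -> 2045).
growth_per_decade = 1_000
years = 2045 - 2023
factor = growth_per_decade ** (years / 10)
print(f"{factor:,.0f}x")  # ~3,981,072x, i.e. roughly 4 million, the same ballpark as above
```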

Even if Moore's law ended, the trend would continue because server farms are growing at an exponential pace and the cost of solar energy is dropping toward zero. If we have a breakthrough in fusion power, it will accelerate beyond our models.

Today we can simulate vision (roughly 20% of the human brain) but we're simulating it in a way that's far more computationally efficient than the human brain, because we're making the absolute most out of our hardware.

It's pretty likely we'll reach superhuman-level AGI well before 2045.

1

phriot t1_j5ovn3n wrote

I don't think that you have to simulate a human brain to get intelligence, either. I discuss that toward the end of my comment. But the OP asked about counterarguments to the Kurzweil timeline for AGI. Kurzweil explicitly bases his timeline on those two factors: computing power and a good enough brain model to simulate in real time. I don't think the neuroscience will be there in 6 years to meet Kurzweil's timeline.

If we get AGI in 2029, it will likely be because some other architecture does work, not because Kurzweil was correct. In some writings, Kurzweil goes further and says that we'll have this model of the brain because we'll have really amazing nanotech in the late 2020s that can non-invasively map all the synapses, activation states of neurons, etc. I'm not particularly up on that literature, but I don't think we're anywhere close to having that tech. I expect we'll need AGI/ASI first in order to get there before 2100.

With regards to your own thinking, you only mention computing power. Do you think that intelligence is emergent given a system that produces enough FLOPS? Or do you think that we'll just have enough spare computing power to analyze data, run weak AI, etc., that will help us discover how to make an AGI? I don't believe that intelligence is emergent based on processing power, or else today's top supercomputers would be AGIs already, as they surpass most estimates of the human brain's computational capabilities. That implies that architecture is important. Today, we don't really have ideas that will confidently produce an AGI other than a simulated brain. But maybe we'll come up with a plan in the next couple of decades. (I am really interested to see what a LLM with a memory, some fact-checking heuristics, ability to constantly retrain, and some additional modalities would be like.)

1

red75prime t1_j5kjfq2 wrote

I expect AGI around 2030.

I think the most likely reason for an extended AGI timeline (but still not probable enough to affect my estimate) is that the brain does use quantum acceleration for some parts of its functionality.

2

TopicRepulsive7936 t1_j5nitb5 wrote

If it does, that only means quantum computation is really easy.

1

red75prime t1_j5np70q wrote

It could still require considerable time to get enough hints on how the brain does that to implement it technologically. Evolutionary solutions can be messy.

1

MrEloi t1_j5kueub wrote

I've no idea about the AGI timeline.

However, we are very close to quasi AGI.

We could have 99% human-like help desks, human-like doctors, etc. within a year or so.

2

QLaHPD t1_j5lauxj wrote

What we might need is neural data. Just wait until Neuralink releases a dataset of 2 years of neural activity gathered from 2,000 patients. With that, you could train a diffusion model to generate brains.

2

AdorableBackground83 t1_j5kl73w wrote

Do you consider my predictions made at the start of 2023 to be pessimistic or conservative?

2030 - AGI

2040 - ASI

2050 - Singularity

Of course I hope I’m wrong and that these things happen a lot sooner. The sooner the better.

It’s just I don’t like to be too optimistic to the point to where it’s wishful thinking.

1

LambdaAU t1_j5ntumh wrote

The argument I find most convincing is that whilst technology is moving fast, politics is always slow. Regulations and a general political misunderstanding of AI might lead to pushback against AI, or governments might purposely slow down development. Even if an AI were made that could replace, say, McDonald's workers, the government might place a ban or tax on companies using AI in order to keep employment high. It sounds like a dumb solution (and I agree), but to politicians it might sound like a good one.

1

Ortus14 t1_j5oachc wrote

Kurzweil has put AGI, as in AI surpassing human intelligence, at 2045.

2029 is the date he put for AI passing the Turing test. I wouldn't be surprised if ChatGPT can already pass the Turing test.

Personally I agree with his dates and I also agree with him that it may happen sooner.

In 1990, Kurzweil predicted an AI would beat the world's best chess player by 2000. It happened in 1997.

1

Iffykindofguy t1_j5qewlj wrote

None of those people know. The people you think are geniuses are just one person at the head of a big team. Progress isn't made by one person; it's always, always, always a group thing. So while they may know a lot in their field, there's also a lot they surely don't know. Other people do, and with something as complex as this, there's just no way to know, since there's no collective tracking of who is doing what where.

1