
Ezekiel_W t1_j9m8nwm wrote

Much, much closer to an inevitability than an impossibility.

43

AnakinRagnarsson66 OP t1_j9mmdhh wrote

When will it happen?

5

bluzuli t1_j9mqqsr wrote

Probably immediately after AGI. Almost all ANI systems today are already superhuman because they have access to far more compute power and training data than a human brain is capable of; you would expect the same pattern to emerge once we have AGI.

17

AnakinRagnarsson66 OP t1_j9nhoir wrote

Are you saying AGI will be able to upgrade itself into ASI?

3

bluzuli t1_j9nj6zm wrote

Mm, not really, although an AGI improving itself into ASI is also a possibility.

I'm just pointing out that every ANI system today is already superhuman because it has access to vast compute beyond what a human brain can achieve.

Any AGI system that appears would also benefit from this.

7

Vince_peak t1_j9nvluz wrote

>ASI

Any AGI will be ASI, as it will be able to perform any narrow intelligence task vastly better than humans (think of a calculator), but will also know everything better than a human (constant, continuous, instantaneous access to all data, the internet, everything).

6

Ortus14 t1_j9mu59m wrote

It will happen in the next fifty years unless there's a nuclear winter or something that destroys most of human life before then.

8

Hands0L0 t1_j9pzdya wrote

I feel like the best metric I can think of that is totally feasible is this: when we can show an AI a video without dialogue, with all of the concepts delivered strictly by how the human actors interact in the video, and the AI can tell you about the video in precise detail, we're right there. I honestly think this isn't very far off (10-20 years). There are plenty of Python libraries that can detect what objects are in live video; the next step is understanding interactions, and once it can comprehend something that it itself can't ever reproduce, AGI is imminent.
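Just to illustrate the gap: the per-frame "what objects are in the video" part is roughly this much code today. A minimal sketch, assuming the `ultralytics` (YOLOv8) and `opencv-python` packages and a hypothetical local file `clip.mp4`; it only labels objects frame by frame, which is exactly the easy part, and says nothing about how those objects are interacting.

```python
# Minimal sketch: label objects per frame in a video file.
# Assumes the `ultralytics` and `opencv-python` packages are installed
# and a video file named "clip.mp4" exists (hypothetical example).
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained object-detection model

cap = cv2.VideoCapture("clip.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)      # run the detector on one frame
    for box in results[0].boxes:
        label = model.names[int(box.cls)]      # class name, e.g. "person"
        conf = float(box.conf)                 # detection confidence
        print(f"{label}: {conf:.2f}")
cap.release()
```

Everything above the "understanding interactions" line is basically commodity now; the hard step is going from a list of labels per frame to a description of what the people in the video are doing to each other and why.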

1

Nano-Brain t1_j9qv94j wrote

But to be AGI the software must be able to "dream" up new things, not just recognize patterns in big data. It must be able to produce its own data by coming to conclusions from little or no data initially given to it.

So, it could take longer. However, all it really takes is one "Aha!" moment from a computer scientist, which could very quickly usher in the very first AGI models. After all, given how long we humans have been trying to figure this out, one can assume that this major technological shift is just around the corner.

I assume the first models won't be the last models. So, there will still be more time required after the first model is created.

But it's this first model that will inevitably usher in the singularity, because humans will not be the ones doing the engineering after that point. It will be the software modifying or upgrading itself... faster and better with each iteration.

1

Hands0L0 t1_j9qy2ad wrote

I mean, not every human has the creativity to create new things. But that doesn't mean they aren't intelligent.

1

Nano-Brain t1_j9r1gx0 wrote

I don't think that's true. I think even the dumbest humans have dreams that generate new ideas, however abysmal they may be.

But even if you're correct, unless the AI can extrapolate from the data we give it and dream up things we've never thought of, it will never be different from or smarter than us, because it will always be beholden to the data that we manually feed it.

1

Hands0L0 t1_j9r49i6 wrote

I think you may be overstating human creativity. There are plenty of visionaries among us who create new concepts, but the vast majority of us are -boring-. We share the same memes, and when we try to make our own memes they fall flat. How many people do you know who have tried to write a book, only for it to end up rife with established tropes? How many hit songs use the same four-chord progression? When was the last time you experienced something -truly- unique? It's been a long time for me, that's for sure.

So I don't think "making something totally unique" is the best metric for AGI. Being able to infer things? That's where I'm at. But I'm not an expert, so don't take what I'm claiming as gospel.

1