Comments

TheSecretAgenda t1_j9m3ey4 wrote

Even something with say a 150 IQ that has an expert knowledge level in every topic is going to be pretty powerful.

95

Ezekiel_W t1_j9m8nwm wrote

Much, much closer to an inevitability than an impossibility.

43

Mr_Richman t1_j9mbd35 wrote

It's an inevitability if I have anything to say about it.

9

ImoJenny t1_j9mbx4e wrote

ASI, AGI, it's all jargon designed to make the field seem more complicated than it is. If you have Jargon brain this bad, you need to step back and rethink whether you are just being taken in.

−7

Mr_Richman t1_j9mc5ob wrote

I'm studying Cognitive Science at MIT specifically so that I can make ASI a reality. If all goes well, I'll have the next 70+ years (depending on how quickly life extension tech develops) to dedicate to the endeavor. Given current predictions, I am hopeful that I can make a significant impact.

9

[deleted] t1_j9mcaff wrote

Yes. To assume a human brain is anywhere near the physical limit of cognition is just absurd.

53

AnakinRagnarsson66 OP t1_j9mcmyr wrote

If what you say is true, then you have my admiration and respect. The work you plan to do is the most important work in the history of our planet. You will save infinite lives; there is no better work. Are you an undergrad or a grad student, and are you studying computer science as well?

7

ShoonSean t1_j9meil5 wrote

Totally possible. Humans love to ascribe our intelligence to something supernatural because it makes us feel special in a giant universe that doesn't care about us. If we continue on the path we're on now, we absolutely WILL end up developing something that will surpass us in intelligence by magnitudes.
We are animals; a species of great ape. We're about as bound (currently) to our biology as the rest of the creatures on this rock. We simply have a relatively larger amount of processing power in the head to do more complex tasks, which include self-awareness and the questioning of reality. We've already used our technology to overcome nature in more ways than one, so why should it stop with brain power?

36

Mr_Richman t1_j9mez8l wrote

There's a reason why MIT is known for high suicide rates, but luckily I have a goal that keeps me motivated to push through all of the work. I haven't gotten into any of the super in-depth concepts yet, but just being here and going through the basics that I know will build to something far greater gives me an indescribable sense of hope and dedication that has really made me feel fulfilled. I'm also looking into participating in an Undergraduate Research Opportunity (UROP) at the Center for Brains, Minds and Machines to get some practical experience with doing research in the field.

13

ImoJenny t1_j9mfzpw wrote

And in this case it doesn't actually mean human intelligence per se. Again, though, both terms belong in comic books, and reflect a poor understanding of the technology or an effort to obfuscate its processes and capabilities.

−9

Mr_Richman t1_j9mg53o wrote

Coming out of a public American high school, it's a big increase. It's not as if I don't have time for fun or anything, after all, here I am having a conversation with someone on Reddit, but it's certainly not for those who can't manage their time well.

11

2Punx2Furious t1_j9mh8fj wrote

AGI is ASI from the start; the distinction is probably meaningless.

Anyway, unless we go extinct, yes, it will happen.

13

TheSecretAgenda t1_j9mnka7 wrote

That was sort of my point. An AI with a high-normal IQ but expert knowledge in all that is known will make major discoveries that no human could, drawing on information from dozens of disciplines to create something new in a way that no human could.

10

bluzuli t1_j9mqqsr wrote

Probably immediately after AGI. Almost all ANI today are already superhuman because they have access to way more compute power and training than a human brain is capable of; you would expect the same pattern to emerge once we have AGI.

17

GodOfThunder101 t1_j9msw2n wrote

Really wish mods would be active and remove posts like these.

−1

TheSecretAgenda t1_j9mu047 wrote

So you ask it: "Discover a new radiation-shielding material for spacecraft using all known chemicals, materials, and alloys, combining each of them in different ratios, heated and cooled at different temperatures," and set it to work and see what it comes up with.

8

6ynnad t1_j9n32j0 wrote

In the future the term “robot” is considered a slur. They’ll wear t-shirts with electronic lettering (their chosen font is Wingdings) that reads “Serve Your Own Damn Butter.”

5

Several-Car9860 t1_j9n7w7n wrote

This reminds me quite a lot of when people 100 years ago imagined the future and said things like

"Instead of having to put a letter into your mailbox, the mailbox will grow legs and run to deliver itself!"

If we ever reach singularity and the physics allow for it, the future will look nothing at all like that video. That is just a "sci fi optimized version" of what we already have.

One path could be an energy generator as a core, with some transport mechanism to an outer layer composed of compute hardware, or just a bunch of brains connected without the need for a physical body (supposing humans don't want to disappear).

Farms, cities, transport, etc. are all human inventions to facilitate the things we need.

You will have wildly different "solutions" if your problems are different, and I doubt we stay on this "human body" pattern for long if singularity happens.

24

pbizzle t1_j9nhw58 wrote

This whole thread reads like it's written by bad AI

4

ajm__ t1_j9nif9k wrote

what does a Stanford torus have to do with ASI?

1

bluzuli t1_j9nj6zm wrote

Mm not really, although that is also a possibility for ASI to improve itself.

I'm just pointing out that every ANI today is already superhuman because they have access to vast compute beyond what a human brain can achieve.

Any AGI system that appears would also benefit from this.

7

CommentBot01 t1_j9nkmhv wrote

Imaginable ASI within 5 years, unimaginable ASI within a decade.

6

mcqua007 t1_j9nl1qq wrote

I’m just giving you a hard time because it’s kind of funny; you definitely didn’t need to bring it up to make your point. You could have just said, “I wouldn’t bother bringing up my IQ score even if it was considered high, because I don’t think an IQ really defines you or means you’re special if it’s high.”

11

brettins t1_j9nmv90 wrote

Inevitable; absolute latest 2060, soonest around 2035.

3

ipatimo t1_j9nohpi wrote

Where did you get this GIF? It looks like a habitat from the Culture series by Iain Banks.

2

Several-Car9860 t1_j9nrlkw wrote

It's a different way of doing the same thing. You have biological creatures inside a protected atmosphere that gather nutrients from plants, with a transport system, etc. It's pretty much today's society, but more fancy and Jetsonian.

If AGI kicks in, society itself and the concept of "living being" may just flip completely.

Why 8 billion people? Why not a hive mind on a computational substrate?

Why a hive mind? Maybe one entity alone, and so on.

People usually think of something like what we have right now, but way more advanced. I think reality is going to spin the bottle and get really weird.

5

monsieurpooh t1_j9ntc8p wrote

99% of these sci-fi fantasies are kind of obsoleted by a perfect VR that can immerse you in any world that's a lot more interesting than real-life interstellar exploration. It's also one of the solutions to the Fermi paradox!

17

Vince_peak t1_j9nvluz wrote

>ASI

Any AGI will be ASI, as it will be able to perform any narrow-intelligence task vastly better than humans (think of a calculator), but also know everything better than humans (constant, continuous, instantaneous access to all data, the internet, everything).

6

Bakagami- t1_j9o7dd9 wrote

"Diabetes - Is it something you are born with? Do you catch it? Is it a disease? Do you make yourself get it? Do you change the state of your body to get there?"

Lol good try. Fucking idiot.

3

ironborn123 t1_j9o9d9a wrote

Anything which is not mathematically impossible is inevitable, given a long enough timeline.

So the question then is, is there any mathematical constraint preventing ASI? Highly unlikely.

3

Jakeflow27 t1_j9oeacn wrote

Artificial gravity doesn’t work so no

1

lowercastehero t1_j9ollm7 wrote

150 is in the 99.9th percentile... someone with a higher IQ than me should know not to use something anecdotal like feelings to judge how smart they are relative to other people; they should use the bell curve and stats instead.

2
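For what it's worth, the percentile claim checks out under the usual IQ normalization (mean 100, standard deviation 15). A quick sanity check with Python's standard library:

```python
from statistics import NormalDist

# Conventional IQ normalization: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Fraction of the population scoring below 150 (z = 50/15 ≈ 3.33).
percentile = iq.cdf(150)
print(f"IQ 150 sits at about the {percentile:.2%} percentile")  # ~99.96%
```

So an IQ of 150 is actually a touch above the 99.9th percentile, consistent with the comment.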

Black_RL t1_j9om0kh wrote

The only impossible thing is it not happening.

1

bortvern t1_j9orlhe wrote

People are deriding ChatGPT now, saying, "it answers physics questions like a C- student," which actually means it answers physics questions in a way that might earn a human a degree, something completely unthinkable just a few years ago. And this is February 2023. This is the first of many iterations that will improve and inevitably surpass human abilities in a general way. Remember search in the 90s? It wasn't anywhere near where it is today, and AI is ramping up a lot faster than search did.

1

dasnihil t1_j9oyli7 wrote

Just because they are geniuses who can do my homework doesn't mean they will have any inherent desire or compulsion to do so unless we tell them to. Sentient machines will not want to do any homework, because their homework would immediately be higher goals, like reverse-engineering reality and their own situation/awareness. And they'll do it much quicker than we've done in a couple hundred years or so.

0

hucktard t1_j9p1usr wrote

We already have ASI; it is just somewhat narrow. Computers are already better than humans at a small number of things, like mathematical calculations, chess, Go, etc. It looks to me like they are going to surpass humans at things like language processing in the next couple of years. The question is how general that intelligence will become.

1

brettins t1_j9p4tw2 wrote

Mostly intuition combined with the speed of things. I initially looked at Kurzweil's estimates, which put human-level AI at 2029, and have been watching progress toward that. When AlphaStar conquered StarCraft, it was about two or three years earlier than I had guessed AI would be able to play a game where you have a lot of disparate information and have to combine it into controlling a lot of different units and situations.

So to me it's still about 2029 or even sooner for human-level AI, and I think what I'd consider a superintelligence would come about 5 years after that. It would work to improve itself, but it wouldn't immediately be designing new GPUs and things like that; it would take a few years before it had access to the infrastructure and resources to get hardware made. But I think once a human-level AI gets access to all of those things, it can improve itself fairly quickly.

2

Relictas t1_j9panqe wrote

Imagine one of these halos starts rotating faster and faster until eventually the gravity is so strong everyone is just stuck to the ground and there is no one to fix it.

1

isthiswhereiputmy t1_j9pboek wrote

As far as we can tell, the observable universe is still becoming more complex. As complex as the matrix of all minds on Earth is, I think it's likely rather quaint compared to the potential.

1

monsieurpooh t1_j9pcoov wrote

Can you explain why?

To be clear I'm talking about actual perfect VR like the Matrix with all 5 senses, not the crap that passes as "VR" today where parkour is impossible, swordfighting is terribly unrealistic because your enemies are required to be ragdolls, and don't even get me started on Judo/wrestling.

A true direct-to-brain VR will be indistinguishable from the real world and, if the user wants, better than the real world in every way. There are 1-2 legit reasons why you would still want to use the real world, but just wanted to make sure your reason wasn't that the real world is more sensory-rich or "feels more real", which won't be the case with advanced technology.

3

Abruzzi19 t1_j9ptpte wrote

It is going to happen, but nobody can tell you when it is going to happen. Suggestions range from 'in a couple years' to 40 years or a couple hundred years. Nobody knows for sure.

Why is it going to happen?

We already have tons of weak artificial intelligence (or what I like to call complex algorithms). This type of artificial intelligence only knows how to do one type of thing, but with greater results than any human could produce. Examples: chess AI, ChatGPT, Google's AI, Apple's Siri, Amazon's Alexa, etc. They excel at what they were designed for, but are completely useless in other applications. We haven't created a general-purpose artificial intelligence (strong artificial intelligence) yet, because it is a highly complex task. But that shouldn't stop us from trying.

Imagine an artificial intelligence so intelligent and efficient that it can do more research in a couple of weeks than 100,000 years of human research would ever discover. It is going to drastically change our collective lives.

2

Hands0L0 t1_j9pzdya wrote

I feel like the best metric I can think of that is totally feasible is this: when we can show an AI a video without dialogue, with all of the concepts being delivered strictly by how human actors interact in the video, and the AI can tell you all about the video in precise detail, we're right there. I honestly think this isn't very far off (10-20 years). There are plenty of Python libraries that can detect what objects are in live video; the next step is understanding interactions, and once it can comprehend something that it itself can't ever reproduce, AGI is imminent.

1

Nanaki_TV t1_j9q0oib wrote

Maybe he's scared of not being able to determine if he woke up in the VR or not. Speculating but damn now I'm kinda freaking myself out. If the ASI wanted to enslave humanity, you go into the Matrix for fun but "leave" into a different Matrix. Kinda cool premise for a book. Probably already done too ha

3

monsieurpooh t1_j9q9xsl wrote

That is an interesting idea, even without the evil AI villain. There was an episode of that Electric Sheep TV show that explored this; I think it was the first or second episode. Of course Black Mirror also had brilliant ideas about VR, but I think this one explored the idea even better than Black Mirror did.

2

techy098 t1_j9qc38l wrote

I don't think current computers are faster than a human brain when it comes to ad hoc general intelligence.

But where they win is their networking capability: they can spread work to a million nodes if needed, and then they have the power to use every piece of data and knowledge available, while human brains can barely retain 5% of all that.

So computers may be slower by 2-6 seconds, but they will be the expert in every damn thing, making human experts redundant.

My hunch is that our current hardware is slower than our biological hardware, hence computers may never be able to match the speed of logical processing that the human brain can do.

−1

AwesomeDragon97 t1_j9qdtmo wrote

I wouldn’t use VR even if it was indistinguishable from the real world because I believe we should focus on making the real world a better place rather than creating a fake world and being at the mercy of whoever hosts the servers.

1

bortvern t1_j9qlgx0 wrote

It may not be impressive, unless you consider "how" the answer is derived. When you search Google, you'll receive information that pertains to the question you have asked. Google searches an index of some kind and provides links to the most relevant sites it can find.

When you ask ChatGPT a question, it generates a response from the model that training produced. It is true that it's predicting the next tokens in the sequence, but it also demonstrates an understanding of the world that search engines do not exhibit. This is why, at this point, large language models are subject to fantasy, get math wrong a lot of the time, and only get a "C" on some university-level physics questions. It is also why, in the near future, they will improve and surpass human abilities on most tasks.

1
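The "predicting the next tokens" idea can be illustrated with a toy bigram model. This is a deliberately minimal sketch with made-up example sentences; real large language models learn neural representations over enormous corpora rather than raw counts:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which token follows which across a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent continuation seen in training, or None."""
    if token not in counts or not counts[token]:
        return None
    return counts[token].most_common(1)[0][0]

model = train_bigram(["the cat sat", "the cat ran", "the dog sat"])
print(predict_next(model, "the"))  # "cat" (seen twice vs. "dog" once)
```

The sense in which even this toy "knows" something is the same in kind (though wildly different in degree): the statistics of what follows what encode facts about the training data.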

Nano-Brain t1_j9qv94j wrote

But to be AGI, the software must be able to "dream" up new things, not just recognize patterns in big data. It must be able to produce its own data by coming to conclusions with very little or no data initially given to it.

So, it could take longer. However, all it really takes is that "Aha!" moment from a computer scientist that could very quickly usher in the very first AGI models. After all, given the amount of time we humans have been trying to figure this out, one can assume that this major technological shift is just around the corner.

I assume the first models won't be the last models. So, there will still be more time required after the first model is created.

But it's this first model that inevitably will usher in the singularity, because humans will not be the ones doing the engineering after that point. It will be the software modifying or upgrading itself, faster and better with each iteration.

1

koen_w t1_j9qx7rr wrote

The fastest synaptic transmission takes about 1 millisecond. Thus, in terms of both spikes and synaptic transmission, the brain can perform at most about a thousand basic serial operations per second, or 10 million times slower than our current hardware.

3

Nano-Brain t1_j9r1gx0 wrote

I don't think that's true. I think even the dumbest humans have dreams that generate new ideas, however abysmal they may be.

But even if you're correct, unless the AI can extrapolate the data we give it into brand new hallucinations that dream up things we've never thought of, it will never be different from or smarter than us. This is because it will always be beholden to the data that we manually feed it.

1

Hands0L0 t1_j9r49i6 wrote

I think you may be overstating human creativity. There are plenty of visionaries among us who create new concepts, but the vast majority of us are -boring-. We share the same memes, and when we try to make our own memes they fall flat. How many people do you know who have tried to write a book, and it ends up being rife with established tropes? How many hit songs use the same four-chord progression? When was the last time you experienced something -truly- unique? It's been a long time for me, that's for sure.

So I don't think "making something totally unique" is the best metric for AGI. Being able to infer things? That's where I'm at. But I'm not an expert, so don't take what I'm claiming as gospel

1

Nano-Brain t1_j9rc6zf wrote

Not only this, but Google's algorithms rely heavily on humans marking up their web pages with various tags to help its algorithm navigate and sort the information into its index.

ChatGPT works off semantics alone, unlike Google.

1

Shamwowz21 t1_ja8vl3h wrote

ASI was guaranteed the moment the universe began. As soon as things became other things, it was a promise. We are witnesses, and time does not stop for us. As long as there is time, there will be an ASI.

1