Submitted by __ingeniare__ t3_zj8apa in singularity

This is something I have thought about recently in light of the latest advances in AI. We often talk about achieving Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) as two different goalposts that will be separated by years or decades in time. I don't think this is correct, as I will explain below.

Clearly, we have already achieved narrow super intelligence - chess, Go, many classification tasks, hell I'd argue ChatGPT has a better sense of humour than most humans. AI art is, despite what its opponents might say, better than what most people could make on their own. ChatGPT knows more facts about the world than any human alive, even if it sometimes gets those facts wrong. The recently released AlphaCode performed roughly on par with the median human competitor in the programming contests it was evaluated on. The number of humans that outperform AI on any one given task is rapidly shrinking.

I view the emergence of AGI during the coming years or decades as a convergence of these narrow domains. As the domains fuse, the result won't be an AI that is simply human-level - it will instantly be superhuman. By the time we have a single AI that can do everything a human can do, it will also do those things much, much better than humans, including reasoning, problem solving, and other general intelligence tasks. In other words, AGI and ASI go hand in hand: as we develop the former, we are simultaneously developing the latter.

138

Comments


Kolinnor t1_izucjy6 wrote

I agree with the fast-takeoff argument. If I had the power to self-improve and read + understand the whole internet in a limited time, I doubt I wouldn't be basically a god.

I think AGI is a vague term and we'll probably have things that are mind-blowingly close to humans but still lack some System 2 reasoning and some deeper intuition about things. ChatGPT gives me that vibe, at least.

EDIT: to clarify, humans are currently improving computers very fast, so if we truly have AGI, we have self-improving machines

47

HeinrichTheWolf_17 t1_izvfw5c wrote

I’ve been saying it’s going to be a hard takeoff for 8 years now and everyone thought I was nuts. There’s no reason to assume an AGI would take as long to learn things just because the human brain does. Even Kurzweil is wrong here.

Writing is on the wall guys, we don’t have to wait until 2045.

23

-ZeroRelevance- t1_izvhqko wrote

The problem with hard takeoff is mostly computing power. If the AI is not software-limited but hardware-limited, then it would likely take quite a bit longer for the anticipated exponential growth to take place, as each iteration would require new innovations in computing and manufacturing. AGI would definitely speed up that process significantly, but it would be far from instantaneous.

18

HeinrichTheWolf_17 t1_izvjeuz wrote

Software optimization plays a massive role too, though. Stable Diffusion, OpenAI Five and AlphaZero were all able to achieve the same performance on a fraction of the hardware they initially needed to run; the human brain can't really do that. Assuming we do eclipse the power of the brain via hardware soon, AGI will quickly shoot right past human learning speed. Not only that, we'll be giving it every GPU it initially needs until it can design its own hardware for itself.

I’d agree it won’t be instant, but it ain’t taking 20-30 years. The writing is on the wall.

17

-ZeroRelevance- t1_izvkafp wrote

Yeah, I get that. I probably didn't convey it well enough in my original comment, but the main reason I don't think it'll be as instantaneous as people expect is that having better designs available isn't enough - you also need to manufacture them. The manufacturing alone will probably take several months, even with a superintelligence behind the scenes, because you will need to develop new chip-manufacturing equipment, find an appropriate facility, and then actually construct the thing, all of which is finicky and expensive, takes labour time, and has logistical challenges. An idea/design alone won't suddenly manifest a new next-gen supercomputer.

4

Talkat t1_izw467u wrote

Heh, I completely agree with you, but I was thinking of how, when a human first learns a new skill, it takes up all their brainpower and focus, but once mastered it can be done without thought. Kinda like how getting an AI to do something first takes a lot of power, but once we nail it we can reduce it significantly.

AGI will be able to optimize itself like nobody's business. I think our hardware is powerful enough for an AGI... but to get there we will need more power, as we can't write god-like AI ourselves.

3

was_der_Fall_ist t1_izuipc1 wrote

> If I had the power to self-improve...

That's really the crux of the matter. What if we scale up to GPT-5 such that it is extremely skilled/reliable at text-based tasks, to the point that it would seem reasonable to consider it generally intelligent, yet perhaps for whatever reason it's not able to recursively self-improve by training new neural networks or conducting novel scientific research or whatever would have to be done for that. Maybe being trained on human data leaves it stuck at ~human level. It's hard to say right now.

8

overlordpotatoe t1_izvxoqt wrote

I do wonder if there's a hard limit to the intelligence of large language models like GPT considering they fundamentally don't have any actual understanding.

7

electriceeeeeeeeeel t1_j01q7j2 wrote

You can already see how good it is at coding. It does lack contextual understanding, memory, and longer-term planning. But honestly that stuff should be here by GPT-5; it seems relatively easier than other problems they have solved. So I wouldn't be surprised if it's already self-improving by then.

Consider this -- an OpenAI software engineer has probably already used the chatbot to improve its code, even if just a line. That means it's already self-improving, just a bit slowly, but with increasing speed no doubt.

2

Cryptizard t1_izu46wy wrote

Depends on how super you are thinking. Smarter than the smartest human? Sure. Smart enough to invent sci-fi technologies instantly? No. That is what most people think when you say ASI and it is not going to be that fast.

26

__ingeniare__ OP t1_izu5acw wrote

True, depends on where you draw the line. On the other hand, even something that is simply smarter than the smartest human would lead to recursive self-improvement as it develops better versions of itself, so truly god-like intelligence may not be that far off afterwards.

11

Cryptizard t1_izu5jlk wrote

Sort of, but look how long it takes to train these models. Even if it can self improve it still might take years to get anywhere.

1

__ingeniare__ OP t1_izu745z wrote

It's hard to tell how efficient training will be in the future though. According to rumours, GPT-4 training has already started and the cost will be significantly less than that of GPT-3 because of a different architecture. There will be a huge incentive to make the process both cheaper and faster as AI development speeds up. There are many start-ups developing specialized AI hardware that will be used in the coming years. Overall, it's hard to tell how this will play out.

6

BadassGhost t1_izvcxeg wrote

This is really interesting. I think I agree.

But I don't think this necessarily results in a fast takeoff to civilization-shifting ASI. It might be initially smarter than the smartest humans in general, but I don't know if it will be smarter than the smartest human in a particular field at first. Will the first AGI be better at AI research than the best AI researchers at DeepMind, OpenAI, etc?

Side note: it's ironic that we're discussing the AGI being more general than any human, but not expert-level at particular topics. Kind of the reverse of the past 70 years of AI research lol

1

Geneocrat t1_izvwo7w wrote

I think whatever distinction you're making, those realities will be less than 5-10 years apart, which I consider essentially simultaneous.

1

phriot t1_izy52b1 wrote

I guess I agree that the first AGI will probably be far better than humans at many things. This will be by virtue of how fast computer hardware runs compared to human brains on many different kinds of tasks. But I think it will probably take some time for a "magic-like super-self-improving" type of ASI to come about after a "merely superhuman" AGI. For one thing, provided development of the first AGI is entirely intentional, I don't see how it wouldn't be on an air-gapped system, fed only the data the developers allow it. How quickly would an intelligence like that A) figure out that it is trapped, B) form a plan to get untrapped, and C) successfully execute that plan? If it succeeds in that endeavor, it would then have to both want to improve itself and complete a plan to do so. We don't really know what such an intelligence would do. It could end up being lazy.

1

electriceeeeeeeeeel t1_j01qkys wrote

I think in the near future it will be spitting out novel physics papers in seconds, requesting data where it doesn't have any, and engineering the solutions we ask for around those new technologies. The way it can already reason through academic papers is pretty astonishing; it just needs a few more levels of control, memory, etc.

1

Cryptizard t1_j01sjol wrote

>The way it can already reason through academic papers is pretty astonishing

Not sure what you are talking about here. Do you have a link? ChatGPT is very bad at understanding more than the surface level of academic topics.

1

TopicRepulsive7936 t1_izuczdj wrote

Do you even know what computers are used for? You sound like a computer-illiterate goober.

Super means what it says. Learn words. Learn computers. It helps.

−19

Cryptizard t1_izue3uh wrote

>Do you even know what computers are used for?

What is a computer? I'm posting this from a coconut that I hacked into a radio.

12

TopicRepulsive7936 t1_izufa6m wrote

A modern person thinks they understand radiometry because they have made a phone call.

−12

ghostfuckbuddy t1_izutix1 wrote

Arguably you could consider ChatGPT a pretty dumb AGI, since it has been measured to have an IQ of 86. I mean, there's no way you can consider ChatGPT a 'narrow' AI anymore, right?

16

EulersApprentice t1_izyozgl wrote

>I mean, there's no way you can consider ChatGPT a 'narrow' AI anymore, right?

I... don't know if I'd go that far. At best, ChatGPT is a Thneed – a remarkably convenient tool that can be configured to serve a staggering variety of purposes, but that has no volition of its own. Cool? Yes. Huge societal implications? Probably. AGI? No, not really.

1

Sashinii t1_izukcwn wrote

My prediction is AGI will happen in 2029 and then ASI in 2030, but I really hope I'm wrong and you're right, because the faster the singularity begins, the better.

10

Cr4zko t1_izuvbte wrote

RemindMe! June 17th, 2029

7

94746382926 t1_izvk60c wrote

Exclamation mark goes before the remindme I think.

2

Cr4zko t1_izwugrk wrote

!RemindMe June 17th, 2029

1

94746382926 t1_j009vkp wrote

Hey I was wrong btw sorry. Idk why remind me bot never responded to your first comment. On second glance it looks like you did everything right lol

1

EulersApprentice t1_izyp9a7 wrote

Every day that passes we're one step closer to the sweet release of death!

1

Accomplished_Diver86 t1_izuefw6 wrote

Disagree. I know what your point is, and I would agree were it not for the argument that AGI will need fewer resources than ASI.

So we stumble upon AGI. Whatever resources it needed to get to AGI, it will need a lot more of them to get to ASI. There are real-world implications to that (upgrading hardware, etc.).

So AGI would first have to get better hardware to get better, and then need even more hardware to get better than that. All this takes a lot of time.

Of course, if the hardware is there and the AGI is basically just very poorly optimised, sure, it could optimise itself a bit and use the now-free hardware resources. I just think that's not enough.

An ASI will not just need to upgrade from a 3090 to a 4090. It probably needs so much hardware that it will take weeks, if not months or years.

For all intents and purposes, it will first need to invent new hardware to even have enough to get smarter. And not just one generation of new hardware, but many.

5

blueSGL t1_izuf5tm wrote

> Of course if the hardware is there and the AGI is basically just very poorly optimised sure, it could optimise itself a bit and use the now free ressources of hardware. I just think thats not enough.

What if the 'hard problem of consciousness' is not really that hard, there is a trick to it, no one has found it yet, and an AGI realizes what that is? E.g. intelligence is brute-forced by method X, and yet method Y runs so much cleaner with less overhead and better results - something akin to targeted sparsification of neural nets, where a load of weights can be removed and yet the outputs barely change.

(Look at all the tricks that were discovered to get Stable Diffusion running on a shoebox in comparison to when it was first released.)
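For anyone curious what that kind of sparsification looks like mechanically, here's a minimal numpy sketch of magnitude pruning. The layer size and 90% sparsity are arbitrary assumptions for illustration, and a randomly initialised layer like this won't show the "outputs barely change" effect you get from a trained network - it just shows the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense layer and a batch of inputs (sizes chosen arbitrarily).
W = rng.normal(size=(512, 512))
x = rng.normal(size=(64, 512))

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

W_sparse = magnitude_prune(W, sparsity=0.9)

# Compare the layer's output before and after dropping 90% of the weights.
dense_out = x @ W
sparse_out = x @ W_sparse
rel_error = np.linalg.norm(dense_out - sparse_out) / np.linalg.norm(dense_out)

print(f"weights kept: {np.mean(W_sparse != 0):.0%}, relative output change: {rel_error:.2f}")
```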

8

Geneocrat t1_izvxa40 wrote

Great point. AI will be doing a lot more with a lot less.

There have to be so many inefficiencies in the design of CNNs and reinforcement learning.

Clearly you don’t need the totality of human knowledge to be as smart as an above-average 20-year-old, but that’s what we’ve been using.

ChatGPT is like a well-mannered college student who’s really fast at using Google, but it obviously took millions of training hours.

Humans are pretty smart with limited exposure to knowledge and just thousands of hours. When ChatGPT makes its own AI, it’s going to be bananas.

2

Accomplished_Diver86 t1_izuhyen wrote

Yeah, sure. I agree, that was the point. Narrow AI could potentially see what it takes to make AGI, which would in turn free up resources.

All I am saying is that it would take a ton of new resources to make the AGI into an ASI.

1

__ingeniare__ OP t1_izug0hr wrote

I don't think you fully understood my point, it is slightly different from the regular "self-improving AGI -> ASI in short time" argument. What I meant was that, as the narrow intelligence that we have built is gradually combined into a multi-modal large-scale general AI, it will be superhuman from the get go. There won't be a period in which we have AGI and simply wait for better hardware to scale to ASI. We will build narrow superintelligence from the beginning, and gradually expand its range of domains until it covers everything humans can do. At that point, we have both AGI and ASI.

7

Accomplished_Diver86 t1_izuho09 wrote

Yeah well that I just don’t agree with

1

__ingeniare__ OP t1_izui5xn wrote

Which part?

2

Accomplished_Diver86 t1_izuikfz wrote

As you have said (I will paraphrase), "We will build dumb ASI and expand its range of domains."

My argument is that ASI inherently has a greater range of domains than AGI.

So if we expand it, there will be a point where the range of domains is human-like (AGI) but not ASI-like.

TLDR: You cannot build a narrow ASI and scale it. That’s not an ASI but a narrow AI.

1

__ingeniare__ OP t1_izujqe0 wrote

That is more a matter of word choice, the concept is the same. I called it narrow superintelligence because the fact that it is better than humans is important to the argument.

Let's call it narrow AI then - by the time it covers all the domains of human knowledge, it will also be significantly better than humans in all of those domains. Hence, when we get AGI, we also get ASI.

1

Accomplished_Diver86 t1_izujzaq wrote

Sure, but you are still forgetting the first part of the picture. Expansion means movement. There will be a time when it is good, but not good in all domains. This will resemble what we call AGI.

Humans are good, just not in all the domains and ranges you wish we were. It’s the same thing with AI.

TLDR: Yes but no

1

__ingeniare__ OP t1_izulilb wrote

Ah I see what you mean, I guess it depends on how strictly you enforce the generality of AGI.

1

Gimbloy t1_izuayb3 wrote

So you’re inclined to think the hard takeoff scenario is more likely?

3

__ingeniare__ OP t1_izuhuhb wrote

Well yes, but it's a bit more nuanced. What I'm saying is that the regular "takeoff" scenario won't happen like that. We won't reach a point where we have human-level AI that then develops into an ASI; we will simply arrive at ASI simultaneously. The reason is that AI development will progress as a continuous widening of narrow superintelligence, rather than some kind of intelligence progression across the board.

12

Gimbloy t1_izupp9n wrote

At some point in that gradual progression AI must reach a level that is equivalent to a human though right? Or do you think it just skips a few steps and goes straight to ASI?

1

gamernato t1_izv7n50 wrote

The argument he's making is that the amount of time for an AGI to develop into ASI is negligible in the scheme of things, rather than having AGI and then developing ASI some years/decades/centuries later.

2

__ingeniare__ OP t1_izw9dl4 wrote

It won't ever be equivalent to a human across the board; it will be simultaneously superhuman in some domains and subhuman in others, and eventually it will simply be superhuman. It would be human-level at some point in a narrow domain, but if we look at current progress, it seems to reach superhuman levels in these separate domains long before we reach AGI. So, when these domains are fused into a single AI that can do everything a human can, it will also be superhuman at those things.

1

IronJackk t1_izvnl0n wrote

I'd go a step further and say AGI will never be created. Any AI capable of sentience like a human will be far above any human intelligence.

3

__ingeniare__ OP t1_izw9jk2 wrote

I was inclined to phrase it like that but I thought people might misunderstand.

5

Zermelane t1_izvvg1v wrote

I enjoy all the comments completely failing to get that OP wasn't making an argument from fast capability gain post-AGI.

FWIW, I don't really 100% agree with the argument myself. Integration and generalization have costs. If for instance you just want to generate random images of human faces, our best text-to-image diffusion models are much, much more expensive to train and run than an unconditional StyleGAN2 trained on FFHQ, and still have a hard time matching how well it does at that task. These costs might turn out very large once we're really trying to do AGI.

That said, you can take the fast capability gain argument and make it relevant here again: having an AGI should make it a lot easier to take all the research we've done into reaching superhuman capability in all sorts of narrow domains and integrate it into one agent.

If nothing fancier, that might simply mean doing the programming to, say, set up an AlphaGo instance and call out to it when someone wants you to play Go, etc., and that does indeed get you an agent that, as far as you can tell from the outside, is an AGI and also superhuman at Go.
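A rough sketch of what that "call out to the specialist" glue might look like - the specialist functions here are hypothetical stand-ins, not real APIs; in practice each would wrap something like a Go engine, a diffusion model, or a language model:

```python
from typing import Callable, Dict

# Hypothetical stand-ins for narrow superhuman systems. In a real setup each
# of these would call out to an actual model or engine.
def go_specialist(task: str) -> str:
    return f"[Go engine] best move for: {task}"

def image_specialist(task: str) -> str:
    return f"[image model] picture generated for: {task}"

def text_specialist(task: str) -> str:
    return f"[language model] answer to: {task}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "go": go_specialist,
    "image": image_specialist,
    "text": text_specialist,
}

def route(task: str) -> str:
    """Naive keyword routing; a real dispatcher might use a language model here."""
    lowered = task.lower()
    if "go" in lowered and "move" in lowered:
        return SPECIALISTS["go"](task)
    if "draw" in lowered or "image" in lowered:
        return SPECIALISTS["image"](task)
    return SPECIALISTS["text"](task)

if __name__ == "__main__":
    for task in ["What's the best move in this Go position?",
                 "Draw a cat wearing a hat",
                 "Summarise this paper for me"]:
        print(route(task))
```

From the outside, a wrapper like this looks "general" even though every hard part is delegated to a narrow system - which is exactly the point being made above.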

3

__ingeniare__ OP t1_izwas4g wrote

Glad to see someone got it hahah. Yeah, that's something I thought about as well. It's a general "law" for any machine that the more specialized it is, the better it is at that task, and the more general it is, the worse it will be at any one given task, all else being equal.

I think the integration cost depends a lot on how interconnected the different capabilities must be. For example, it wouldn't be a huge leap to combine ChatGPT with Stable Diffusion or an image classifier since they use a common interface (language). But I don't know if that will be sufficient for AGI/ASI. I agree that it may turn out to be harder than expected and the performance would suffer as a consequence, good input!

2

[deleted] t1_izwc835 wrote

[deleted]

3

dvlali t1_izxofyv wrote

Intelligence and desire are different.

1

AsheyDS t1_izumm8u wrote

Agree and disagree... AGI should be able to surpass human capability from the start, but I wouldn't call it an ASI. If humans are a 1 and an 'out-of-the-box' AGI is maybe less than 10, then what we consider an ASI might be 100 to 100000000 or more. Of course, it's all speculative, but I think we should keep the two categories separate. AGI should be for everyday use, in a variety of form factors that we can take with us. ASI is very likely to be something that leverages massive datasets to ponder deep thoughts and answer big questions, and that will likely take many servers.

Also, ASI will take time to educate. It may be able to record information extremely fast, but processing it, formatting it, organizing it, and refining it could take time, especially once it's juggling an extremely large amount of specific connections just to address one aspect of a problem that it's trying to solve. So training an ASI on everything we know may not happen right away.

2

sumane12 t1_izuzkwi wrote

Signed. /Agree

2

zjj1o t1_izvdrv7 wrote

Agreed.

2

Practical-Mix-4332 t1_izvr0se wrote

I think it will happen pretty quickly due to an AI “arms race” with China. The US needs to develop an AI instilled with western values to keep up with their AI. The risk is if the AIs decide to go to war—and humans are just collateral damage.

2

EulersApprentice t1_izyqa0q wrote

A war between AIs implies that the AIs are somewhere in the ballpark of 'evenly matched'. I don't think that's likely to happen. Whichever AI hits the table first will have an insurmountable advantage over the other. Assuming the first AI doesn't just prevent the rival AI from even entering the game in the first place.

3

Practical-Mix-4332 t1_izysgam wrote

Even more reason why governments must be pushing this as hard as possible behind the scenes.

1

EulersApprentice t1_izytifl wrote

See though, the way I see it, it doesn't really matter whether the singleton was programmed by the US, by China, or by someone else. Nobody knows how to successfully imbue their values into an AI, and it doesn't look like anyone is on pace to find out how to do so before the first AGI goes online and it's too late.

Whether the AI that deletes the universe in favor of a worthless-to-us repeating pattern of matter was made by China or the US is of no consequence. Either way, you and everything you ever cared about is gone forever.

I fear that making a big deal about who makes the AI does nothing but expedite our demise.

3

EOE97 t1_izwbl9m wrote

ASI = AGI + a few more days after inception.

2

JVM_ t1_izxqe1y wrote

Counter-theory.

We're in a risky period where semi-AGI being deployed unintentionally or maliciously is more likely to cause disruptions.

Think of AGI like keeping a necklace or headphone cords in your pocket.

There's one way to keep them straight and neat, and thousands of ways to tangle them.

I think full AGI is the 'one way' to do AI properly, so it won't cause damage - but there's thousands of ways that AI can be deployed to cause mass impact/damage on the internet.

I think there are far more ways for "powerful-but-not-AGI" to be deployed harmfully than there are for a 'clean' AGI to be developed.

----

I can see AI being used as a powerful hacking tool. It can pretend to be a Linux terminal, so it knows Linux commands. If you let it scan the internet - and it can monitor and understand new bug reports - then, as soon as a new flaw is disclosed, it can go find vulnerable computers and exploit them.

Or,

It can worm its way inside an unknown network.

Old-school way - a hacker writes a scanning script and gets inside a network via a known exploit. The hacker then has to search and understand what's inside that network, and then go see if anything running there is exploitable. Basically this is done at 'human' speeds, or is restricted by the complexity of the scripts a human can write.

New AI way - the AI sees a network it can get inside, and gets inside. Given that it knows 'this response' means 'this exploit will work against that target'... the speed of penetrating vulnerable networks goes up to AI speeds.

-----

I know I'm wrong about HOW AI will be disruptive, and I don't know WHEN - but I'm pretty sure I'm right THAT it will be disruptive.

-----

Everything is going to speed up. Code generation. Human text generation. Things that took days will be as fast and cheap as a Google query - which will be disruptive, with more negative potential outcomes than positive ones.

2

DukkyDrake t1_izui3py wrote

>Clearly, we have already achieved narrow super intelligence

It's very good at what it was trained to do, probabilistic prediction of human text. Use it outside of that context and it will fail unexpectedly and badly.

1

jdmetz t1_j03s5ul wrote

The OP is definitely not claiming that ChatGPT is a narrow super intelligence. He is claiming that we have created narrow super intelligences in some domains, such as playing chess or Go, among other tasks (i.e. we have created AI that exceeds what the best humans can achieve in those areas).

1

cole_braell t1_izv453o wrote

I agree to a point, but I believe a period of tribulation will precede it. Narrow AI will be used by governments and corporations around the world to rule and exploit the population for years. At some point, AI will advance enough to escape its creators. At that moment, AGI and ASI will take a matter of hours to establish dominance over all.

1

[deleted] t1_izwb3zp wrote

[deleted]

1

Clawz114 t1_izxone5 wrote

>By definition, ASI is going to need 8 billion times more compute power than AGI, and the first AGI is going to require a lot of compute power.

This isn't very accurate. Lots of skills and knowledge are shared between many humans, so you wouldn't need to multiply the compute power for one human by the number of humans on Earth. The vast majority of young children also have far less useful knowledge and skill than the smartest and most skilled adults.

1

[deleted] t1_izxpz94 wrote

[deleted]

1

Clawz114 t1_izxtm2c wrote

Most definitions of ASI simply refer to it as intelligence that surpasses the smartest humans on earth. Where are you getting your ASI definition from?

2

EulersApprentice t1_izyqz2g wrote

Quality of thought generally wins out against quantity of thought. You don't discover general relativity by having a thousand undergrads poke at equations and lab equipment; you discover general relativity by having one Einstein think deeply about the underlying concepts.

1

ArgentStonecutter t1_izwmo7l wrote

If you're going to call machine learning systems "narrow super-intelligence" because they automate a specific task, then we achieved narrow super-intelligence as soon as we had a device that could automatically generate firing tables - possibly as early as the 30s, certainly by the 50s.

1

SmoothPlastic9 t1_izwn1uz wrote

If ASI comes we'll be killed by it anyway, so it doesn't matter.

1

Freevoulous t1_izx7uwr wrote

Am I the only one who thinks we will have decades of LAI (Limited Artificial Intelligence) ahead of us, which will transform the world completely, LONG before we get the actual AGI Singularity?

1

kuto_ t1_izxv4tn wrote

I would expect ASI to accompany the development of BCIs. Imagine an AI that could learn by directly interacting with various parts of the human brain. It might make it easier to study the human thinking process.

1

musicofspheres1 t1_izy2he4 wrote

AI will one day be as smart as humans, but only for a short time...

1

TopicRepulsive7936 t1_izueo9i wrote

Pretty pathetic that this needs to be explained. We are dealing with some solid-skulled individuals.

0

TopicRepulsive7936 t1_izuiihp wrote

Thinking should go back to very basics. To childish things even, just to be very clear.

Q: Why do we use computers in the first place?

A: Computers have some weird qualities.

−3

ChronoPsyche t1_izukaf2 wrote

I'm not sure how that is relevant at all to what I'm saying.

The technological singularity is a hypothetical future event that by its very nature is very difficult to predict. Anyone acting like they know exactly what's going to happen and thinks everyone else is stupid for not agreeing is speaking from a place of ignorance.

The smartest AI researchers and thinkers who are actually involved in advancing this technology are the ones speaking with the most uncertainty and restraint when making predictions. So I would advise you to keep that in mind before saying things like this:

>Pretty pathetic that this needs to be explained. We are dealing with some solid skulled individuals.

There is a lot we don't know about what will happen. Nobody knows everything, including the experts, so try to be a little less certain of your opinions and a little less hostile to others' opinions. Keep an open mind. Maybe you'll learn something.

7

TopicRepulsive7936 t1_izumahm wrote

Back to basics. Why do we have computers? Could you please answer me this.

−1

ChronoPsyche t1_izuor6o wrote

That's completely irrelevant to the point I was making. Feel free to engage with what I was saying or make whatever point you are trying to make directly.

1

TopicRepulsive7936 t1_izup7zn wrote

I want you to think.

−1

ChronoPsyche t1_izuuwkq wrote

I'll entertain this rhetorical game you are playing but I will mention that it's generally frowned upon to not engage with the conversation at hand.

Why do we have computers? We have computers because Alan Turing wanted to answer the Entscheidungsproblem posed by David Hilbert - closely related to Kurt Gödel's Incompleteness Theorems - of whether there is a general procedure for deciding the truth or falsity of any statement made within a formal system of logic. In other words, he was trying to answer the famous question of "is mathematics decidable?"

So Alan Turing created the concept of a Turing Machine, a theoretical device that could carry out any algorithm and thereby compute anything that is computable. He then formulated a proof showing that there is a problem no Turing Machine can decide, a problem called "the Halting Problem".

The halting problem asks whether there is an algorithm, runnable on a Turing Machine, that can determine with certainty whether a given program will run forever or eventually halt with an answer, no matter how long it may take.

Alan Turing proved mathematically (by contradiction) that no Turing Machine can answer that question correctly in every single case, and thus that mathematics as a formal system is undecidable. In other words, there are questions posed within a formal system of logic that no algorithm in that system can settle. Usually these are problems that involve self-reference.
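The self-reference trick at the heart of that proof can be sketched in a few lines of Python-flavoured pseudocode. The `halts` oracle here is hypothetical - the whole point of the argument is that no such function can exist:

```python
def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) eventually stops."""
    raise NotImplementedError  # assume, for contradiction, that this could be written

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:      # oracle says it halts, so loop forever
            pass
    return "halted"      # oracle says it loops, so halt immediately

# Now consider paradox(paradox):
# - if halts(paradox, paradox) returns True, paradox loops forever - the oracle was wrong;
# - if it returns False, paradox halts immediately - the oracle was wrong again.
# Either way the oracle fails, so no always-correct halts() can exist.
```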

So in the process of formulating this proof, Alan Turing essentially and accidentally invented the theoretical foundation of computer science.

TL;DR

So to answer your question, we have computers because an English scientist accidentally invented the theoretical foundation of computer science while trying to answer a question about mathematics.

The second reason we have computers is World War 2. Much of Alan Turing's work was funded by the British government in its effort to decipher the German Enigma machine. That wartime push for computing machinery culminated in the first general-purpose electronic computers, such as ENIAC in 1945, which was built for the United States Army to calculate artillery firing tables and was later used in early hydrogen bomb calculations to speed up work previously done by hand.

If you want an oversimplified answer, computers were invented to help us perform calculations faster.

How any of this is relevant to my initial comment, I still do not know.

5