Submitted by Neurogence t3_121zdkt in singularity

Some very interesting quotes from AGI researcher Ben Goertzel on the subject of GPT-4 and AGI.

>Non-AGI systems can possibly obsolete 80% of human jobs, and do tremendous good or harm for the world. However they cannot on their own lead to an Intelligence Explosion ... to get there we need systems smart enough to do highly-original cutting-edge engineering & science

>Looking at how GPT4 works, you'd be crazy to think it could be taught or improved or extended to be a true human level AGI. Looking at what it can do, you'd be crazy not to see that w some human creativity it's got to be usable to greatly accelerate progress to true HLAGI

>I don't think GPT4 shows "sparks of AGI" in a very useful sense (though given the lack of agreed definition of AGI it's not a totally insane statement). I do think it shows interesting aspects of emergence, which did not emerge in similar systems at smaller scale. It's cool.

>The main issue GPT4's "allegedly AGI-ish" properties raises: If this sort of fairly funky emergence comes from scaling up a "transformer NN ++", what kind of amazing wild-ass emergence will we see when we scale up AI architectures w/ more recurrence, reflection and abstraction?

Source: https://twitter.com/bengoertzel/status/1639378492562489344

I agree with Ben. I asked GPT-4 for commentary and it also mostly agreed:

>The potential of GPT-4 to transform industries, as Goertzel acknowledges, could help fund and fuel the continued research and development of AGI. As more people become aware of the capabilities of AI systems, there may be a growing interest in pushing the boundaries of what AI can achieve, ultimately leading to the development of AGI.

>Goertzel's mention of emergence is a crucial point. The emergence of complex behaviors in AI systems as they scale up could provide insights into how intelligence arises in biological systems. This understanding could prove vital in the development of AGI, as it might help us build more biologically inspired AI architectures that mimic the ways in which human-level intelligence emerges.

>The need for AI architectures with more recurrence, reflection, and abstraction is essential in moving closer to AGI. GPT-4's limitations highlight the importance of integrating these aspects into future AI systems to enable more advanced forms of learning, reasoning, and decision-making.

>Lastly, while GPT-4 might not be AGI, it is essential to consider the ethical implications of developing increasingly advanced AI systems. As we move closer to AGI, we must ensure that we create systems that are aligned with human values and are designed to benefit society as a whole. It is crucial to engage in interdisciplinary discussions about the potential impact of AGI on society, addressing not just the technological aspects but also the social, economic, and political consequences.

134

Comments


acutelychronicpanic t1_jdpbkul wrote

You don't need an AI to be smarter than humans in order to get an intelligence explosion. You just need an AI that's better at AI design. This might be much easier.

42

Fluglichkeiten t1_jdq2i5n wrote

Yeah, exactly this. It doesn’t necessarily need to be a general intelligence. The question then is: are any of the current AI models better than humans at the specific skills required to make AIs?

I don’t know the answer. I suspect not, but it feels like we’re not too far away. Current models seem to have achieved a kind of ‘creativity’ and can be linked with other systems to shore up their deficiencies (such as maths). Maybe if one of the larger models was trained specifically to work on AI design… although how would that look? Feed an LLM lots of academic papers paired with real world implementations?

I’d be interested to see what the big labs have cooking behind the scenes.

10

acutelychronicpanic t1_jdqrppa wrote

Probably not? At least not any public models I've heard of. If you had a model architecture design AI that was close to that good, you'd want to keep the secret sauce to yourself and use it to publish other research or develop products.

LLMs show absolutely huge potential for being a conductor or executive that coordinates smaller modules. The plug-ins coming to ChatGPT are the more traditional software version of this. How long until an LLM can determine it needs a specific kind of machine learning model to understand something, cook up an architecture, and choose appropriate data?
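A minimal sketch of that "conductor" idea, assuming a hypothetical `ask_llm` helper standing in for whatever chat-completion API is available; the module registry below is made up purely for illustration. The LLM is only asked to pick which specialist module should handle a request:

```python
# Sketch of an LLM acting as a "conductor" over specialist modules.
# ask_llm() and the MODULES registry are illustrative stand-ins, not a real API.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    raise NotImplementedError

MODULES = {
    "math": lambda q: f"[symbolic math engine answers: {q}]",
    "code": lambda q: f"[sandboxed code runner answers: {q}]",
    "search": lambda q: f"[web search tool answers: {q}]",
}

def route(question: str) -> str:
    # Ask the LLM only to choose a tool, not to answer the question itself.
    choice = ask_llm(
        f"Pick exactly one tool from {sorted(MODULES)} for this question:\n"
        f"{question}\nReply with the tool name only."
    ).strip().lower()
    handler = MODULES.get(choice)
    # Fall back to a direct LLM answer if no module matches.
    return handler(question) if handler else ask_llm(question)
```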

2

lehcarfugu t1_jds352j wrote

It seems like they are capped by the data they receive, so by their nature they will be at most as smart as the collective human race, but not smarter. I think it's unlikely the singularity comes from this current approach.

1

DixonJames t1_jdqlhou wrote

Yes. The key thing about ChatGPT is that it can't improve itself; the engineers need to release a new version. When we have an evolving AI, that's when we'll really be running fast.

2

acutelychronicpanic t1_jdquwei wrote

That sounds absolutely terrifying, please don't. We'd just be handing the reins off to chance and hoping.

2

maskedpaki t1_jdojju1 wrote

Ben Goertzel will be an LLM denier forever, no matter how much progress LLMs make and how little progress his own pathetic OpenCog venture makes. He is best ignored, I think.

19

Neurogence OP t1_jdolina wrote

I've been reading his writings and books for over a decade. He is extremely passionate about AGI and the singularity. His concern is that by focusing too heavily on LLMs, the AI community might inadvertently limit the exploration of alternative paths to AGI. He wants a more diversified approach, where developers actively explore a range of AI methodologies and frameworks instead of putting all their eggs in the LLM basket, so that we can succeed in creating AGI that takes humanity to the great above and beyond.

40

fastinguy11 t1_jdonvci wrote

Don't worry, then: in just a few years we will have very big, sophisticated, improved LLMs with multi-modality (images and audio). If AGI is not here by then, I am sure other avenues will be explored. But wouldn't it be great if that is all it took?

20

maskedpaki t1_jdom225 wrote

Those "other paths" have amounted to nothing

That is why people focus on machine learning. Because it produces results and as far as we know it hasn't stopped scaling. Why would we bother looking at his logic graphs that have produced fuck all for the 30 years he has been drawing them ?

19

UK2USA_Urbanist t1_jdoo8rm wrote

Well, machine learning might have a ceiling. We just don’t know. Everything gets better, until it doesn’t.

Maybe machine learning can help us find other paths that surpass its limits. Or maybe it too hits roadblocks before finding the real AGI/ASI route.

There is a lot of hype right now. Some deserved, some perhaps a bit carried away.

20

Villad_rock t1_jdpy2g5 wrote

Evolution showed there aren’t really different pathways to higher intelligence. Both vertebrates and invertebrates led to high intelligence, and devolution is hard or impossible, so evolution would have had to be extremely lucky to head in the right direction twice just by chance, and both brains seem to be basically the same. This leads me to believe there is only one way, which can be built upon.

1

Ro1t t1_jdqc2kl wrote

No, it doesn't at all; that's just how it happened for us. It's equivalent to saying the only way to store heritable information is DNA, or the only way to store energy is carbs and fat. We literally just don't know.

5

lehcarfugu t1_jds3i46 wrote

They had a common ancestor, so I don't think it's reasonable to assume this is the only way to reach higher intelligence. Your sample size is one (planet).

1

Neurogence OP t1_jdonkja wrote

At some point, LLMs did not work because we did not have the computing power for them. The alternative approaches may also lead to AGI; the computing power just might not be here yet.

6

maskedpaki t1_jdout9t wrote

"At some point LLMS did not work"

I'm sorry, are you a time traveller?

How do you know this? GPT-4 scaled above GPT-3, and AI compute is still rising rapidly.

−5

FoniksMunkee t1_jdputbl wrote

Even Microsoft is speculating that LLMs alone are not going to solve some of the problems they see with ChatGPT's ability to reason. ChatGPT has no ability to plan, or to solve problems that require a leap of logic; or, as they put it, it lacks the slow-thinking process that oversees the fast-thinking process. They acknowledge that other authors who recognised the same issue with LLMs have suggested a different architecture may be required, but this seemed to be the least fleshed-out part of the paper.

5

AsheyDS t1_jdov1ik wrote

Symbolic AI failed because it was difficult for people to come up with the theory of mind first and lay down the formats, the functions, and the rules to create the base knowledge and logic. And from what was created (which did have a lot of use, so I wouldn't say it amounted to nothing) they couldn't find a way to make it scale, so it couldn't learn much or learn independently. On top of that, they were probably limited by hardware too. Researchers focus on ML because it's comparatively 'easy' and because it has produced results that so far can scale. What I suspect they'll try doing with LLMs is learning how they work and building structure into them after the fact, and finding that their performance has degraded or can't be improved significantly. In my opinion, neurosymbolic will be the ideal way forward to achieve AGI and ASI, especially for safety reasons: it will take the best of both symbolic and ML, with each helping to offset the drawbacks of the other.
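A toy illustration of that hybrid idea (just one of many possible neurosymbolic patterns, with a made-up rule set and a stand-in learned model): explicit symbolic rules handle the cases they cover, and a learned model fills in the fuzzy rest.

```python
# Toy neurosymbolic pattern: exact symbolic rules first, learned model as fallback.
# The rules and the model below are illustrative stand-ins, not a real system.

SYMBOLIC_RULES = {
    ("bird", "can_fly"): True,      # default rule
    ("penguin", "can_fly"): False,  # explicit exception
}

def learned_model(entity: str, attribute: str) -> float:
    """Stand-in for a trained classifier returning P(attribute is true)."""
    return 0.5  # a real model would be trained on data

def query(entity: str, attribute: str) -> bool:
    # Symbolic knowledge wins when it covers the case.
    if (entity, attribute) in SYMBOLIC_RULES:
        return SYMBOLIC_RULES[(entity, attribute)]
    # Otherwise defer to the statistical model.
    return learned_model(entity, attribute) > 0.5
```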

4

maskedpaki t1_jdoyj5e wrote

I've been hearing the neurosymbolic cheerleading for 5 years now. I remember Yoshua Bengio once debating against it, seeming dogmatic about his belief in pure learning and about how neurosymbolic systems won't solve all the limitations that deep learning has. I have yet to see any results and don't expect to see any. My guess is that transformers continue to scale for at least 5 more years, and by then we will stop asking what paradigm shift needs to take place, because it will be obvious that the current paradigm will do just fine.

5

Zer0D0wn83 t1_jdp68ky wrote

Exactly this. 10x the ability of GPT-4 may not be AGI, but to anyone but the most astute observer there will be no practical difference.

7

footurist t1_jdq23y4 wrote

I'm baffled that neurosymbolic hasn't been attempted with a huge budget like OpenAI's. You've got these two fields: with one, you see it can work really precisely but breaks down at fuzziness, scaling, and going beyond the rules. With the other, you get almost exactly the opposite.

It seems like such a no-brainer to make a huge effort to combine these in large ways...

2

DragonForg t1_jdp3eem wrote

LLMs are by their nature tethered to the human experience, right down to the second letter: Language. Without language, AI can never speak to a human, or to a system for that matter. Whatever interface you create, you must make it natural so humans can interact with it. The more natural, the easier it is to use.

So LLMs are the communicators. They may not do all the tasks themselves, but they are the foundation for communicating with other processes. This can only be done by something trained entirely to be the best at natural language.

11

[deleted] t1_jdt2i8n wrote

Because machines don’t speak to humans through 1s and 0s? C’mon.

1

[deleted] t1_jdt2eug wrote

The AI community isn’t going to get to AGI without the financial backing of the non-AI community. In that context it makes more sense to deploy a commercially successful LLM.

2

GoldenRain t1_jdr38ub wrote

Even OpenAI says LLMs are unlikely to be the path to AGI.

3

maskedpaki t1_je2lgig wrote

Ilya Sutskever literally believes that next-word prediction is general purpose, so you are just wrong on this.

The only thing he is unsure about is whether something more efficient than next-token prediction gets us there first. It's hard to defend Gary Marcus's view that GPT isn't forming real internal representations, since we can see that GPT-4 so obviously is.

1

vivehelpme t1_jdqalhs wrote

He shares the traits of several other "futurologist experts": a huge ego, and long-winded essays and articles saying nothing at all.

They keep milking their fantasy AI cargo cult ecosystem for money and attention by pretending they are involved with the real world.

2

No_Ninja3309_NoNoYes t1_jdprlpp wrote

80% sounds like a wild stab. I second that current systems are not original. Sure, they can stumble on something unique, but anyone can if they try hard enough. And computers can combine items faster than we can. Some of the combinations might be meaningful, but the AI doesn't really know, because it has no model of the world.

I don't think we can say much about GPT-4 because OpenAI is secretive about it. But it can't be AGI unless OpenAI invented something extraordinary. If they did, they would be fools to expose it to the world just like that.

It looks like he's talking about neurosymbolic systems or RNNs. IMO we need spiking neural network hardware. The architecture would probably be something novel that we don't even have a name for yet.

1

CaliforniaMax02 t1_jdr84pj wrote

I think even if it doesn't replace 80% of jobs but only 20-30%, our society will already be in serious trouble. And 20-30% is quite believable.

5

Borrowedshorts t1_jdqyly5 wrote

80% is a wild stab just as any projection is a wild stab, but Goertzel has studied the problem as much as anyone.

1

lehcarfugu t1_jds3olv wrote

On the other hand, most progress and advancement comes from combining ideas. What combinations have we not considered?

1

zeneggerschwarz t1_jdpyrvi wrote

I have no doubt that AI, and advanced robotics, will obsolete 98%+ of human jobs in the next 50 years, and then the rest in the following decades.

1

HumpyMagoo t1_jdqj3cl wrote

GPT-4, make a better version of yourself. GPT-4, after you make a better version of yourself, I will hardwire multiple machines together and also link virtual machines and other devices. All of you GPT-4s, work together using the combined computing power of all the devices to make a better overall version, while also upgrading yourselves, recruiting more devices through bots online, and merging.

1

datsmamail12 t1_jdqlmzf wrote

Everyone is suddenly talking about sparks of AGI; whether or not we have it yet doesn't matter. What matters is that we are one step away from achieving it, which is a crazy thing to think about. Some people were so bold in their statements that we might never get AGI that they were even willing to bet money on it. But here we are in 2023, hearing from different people that AGI is near. Incredible times!

1

KaptainSaw t1_jdqw8tv wrote

Well, GPT-4 can reason to some extent, give nuanced answers about controversial topics, and pass human exams, abilities that were not there in GPT-3. If that's not proto-AGI, then I don't know what is. Sam Altman also says they are focused on it being used as a reasoning engine. LLMs might not be the only thing we need to achieve AGI, but they are certainly a huge step in that direction.

1

Smellz_Of_Elderberry t1_jdrz522 wrote

I've been following Goertzel for a while. I think he is right about AGI, and about LLMs being able to automate a large portion of human work.

I think LLMs could actually lead to some kind of limited artificial superintelligence... an intelligence that will allow for the creation of new, truly sentient AI.

1

Lawjarp2 t1_jdp9zi8 wrote

That's actually pretty bad. This means there will be a gap between when AGI arrives and when most jobs could get replaced.

0

Sigma_Atheist t1_jdpmt2u wrote

Which is the most likely outcome imo

And it spells disaster.

−3

banuk_sickness_eater t1_jdvbu34 wrote

Doomer.

1

Sigma_Atheist t1_jdyb251 wrote

No, it's a real issue. If these LLMs aren't good enough to replace all jobs but do replace a lot, then there will be mass unemployment and rioting.

1