Submitted by Neurogence t3_121zdkt in singularity
Some interesting quotes from AGI researcher Ben Goertzel on GPT-4 and AGI.
>Non-AGI systems can possibly obsolete 80% of human jobs, and do tremendous good or harm for the world. However they cannot on their own lead to an Intelligence Explosion ... to get there we need systems smart enough to do highly-original cutting-edge engineering & science
>Looking at how GPT-4 works, you'd be crazy to think it could be taught or improved or extended to be a true human-level AGI. Looking at what it can do, you'd be crazy not to see that with some human creativity it's got to be usable to greatly accelerate progress to true HLAGI
>I don't think GPT-4 shows "sparks of AGI" in a very useful sense (though given the lack of an agreed definition of AGI, it's not a totally insane statement). I do think it shows interesting aspects of emergence, which did not emerge in similar systems at smaller scale. It's cool.
>The main issue GPT-4's "allegedly AGI-ish" properties raise: if this sort of fairly funky emergence comes from scaling up a "transformer NN ++", what kind of amazing wild-ass emergence will we see when we scale up AI architectures with more recurrence, reflection and abstraction?
Source: https://twitter.com/bengoertzel/status/1639378492562489344

I agree with Ben. I asked GPT-4 for commentary, and it also mostly agreed:
>The potential of GPT-4 to transform industries, as Goertzel acknowledges, could help fund and fuel the continued research and development of AGI. As more people become aware of the capabilities of AI systems, there may be a growing interest in pushing the boundaries of what AI can achieve, ultimately leading to the development of AGI.
>Goertzel's mention of emergence is a crucial point. The emergence of complex behaviors in AI systems as they scale up could provide insights into how intelligence arises in biological systems. This understanding could prove vital in the development of AGI, as it might help us build more biologically inspired AI architectures that mimic the ways in which human-level intelligence emerges.
>AI architectures with more recurrence, reflection, and abstraction are essential for moving closer to AGI. GPT-4's limitations highlight the importance of integrating these capabilities into future AI systems to enable more advanced forms of learning, reasoning, and decision-making.
>Lastly, while GPT-4 might not be AGI, it is essential to consider the ethical implications of developing increasingly advanced AI systems. As we move closer to AGI, we must ensure that we create systems that are aligned with human values and are designed to benefit society as a whole. It is crucial to engage in interdisciplinary discussions about the potential impact of AGI on society, addressing not just the technological aspects but also the social, economic, and political consequences.
acutelychronicpanic t1_jdpbkul wrote
You don't need an AI to be smarter than humans in order to get an intelligence explosion. You just need an AI that's better than humans at AI design. That might be a much easier target to hit.