Submitted by karearearea in r/singularity
GPT-4 shows 'sparks of AGI', and if GPT-5 isn't AGI then surely GPT-6 will be. However, I'm not convinced GPT-7 will be much smarter.
I was thinking about the dataset the GPT models are trained on - the entirety of the internet and all of human writing - and trying to work out what the limit of a model trained on that dataset would be. Standard machine learning practice is to fit a model on a training set and use a held-out validation set to detect overfitting, so that the model generalises to the domain of the training data (see the toy sketch below). The GPT models are trained on human-intelligence-level text. What would perfect generalisation to that training data look like? I believe it would mean the model could produce any text that could conceivably be written by a human-level intelligence.
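To make that held-out-set point concrete, here's a minimal toy sketch - ordinary polynomial fitting with numpy, nothing resembling GPT's actual training pipeline, and all the data and numbers are invented for illustration - showing how a held-out split reveals when a model stops generalising and starts memorising:

```python
# Toy sketch (assumed example): fit polynomials of increasing degree to
# noisy data and use a held-out split to spot where the model stops
# generalising and starts memorising the training set.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth underlying function plus noise.
x = rng.uniform(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + rng.normal(0.0, 0.2, 200)

# Hold out a quarter of the data; the model never sees it while fitting.
x_train, y_train = x[:150], y[:150]
x_val, y_val = x[150:], y[150:]

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    # Overfitting signature: training error keeps falling with model
    # complexity while the held-out error stops improving (or rises).
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, val MSE {val_mse:.3f}")
```

The held-out split is what tells you the model has actually generalised to the domain of the data rather than memorised the training set - that's the sense of "perfect generalisation" I mean above.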
That means it would be basically human expert level in every subject. That would be a paradigm shift for society, with everyone able to consult an expert on almost any subject essentially for free. The nature of many jobs and industries would obviously change too, with AI speeding up people's work enormously and replacing many roles outright. A model that is human-expert level at every subject is already a 'super-intelligence' compared to the average human, but not in a singularity-inducing way. Human experts would still be as smart as the AGI in their own area of expertise, and there are thousands of human experts in every subject.
But for the AGI to get smarter than expert humans using the current approach to training the GPT models, it would need to be trained on a body of text containing more knowledge than they have. If we had text written by a super-intelligence, I have no doubt that with a large enough model and enough compute the GPT architecture could be trained to super-intelligence.
But we don't have that dataset, and I don't think it will be easy to produce either. The AGI will be able to come up with an almost unlimited number of scientific hypotheses, but each one will still take large, expensive experiments to verify, and many of them will turn out to be wrong, just as ours do. So creating an ASI via the methods used to create AGI is not going to be easy, as we lack the necessary terabytes of text written by a super-intelligence. Could we get there with other methods, like reinforcement learning? Maybe, but nobody has built a reinforcement learning agent that comes anywhere close to AGI, so it would require an entirely different approach to the one used to create GPT-4 (see the sketch below for the basic difference).
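For contrast, here's a minimal sketch of why reinforcement learning is a different kind of approach: a tabular Q-learning agent on a toy five-state corridor. The environment, hyperparameters, and names are all invented purely for illustration. The point is that the agent has no corpus of expert behaviour to imitate - it improves from reward alone, so in principle its ceiling isn't set by any pre-existing dataset:

```python
# Toy sketch: tabular Q-learning on an invented five-state corridor.
# Unlike GPT-style training, there is no dataset of human text to imitate:
# the agent improves purely from reward signals.
import random

N_STATES, GOAL = 5, 4              # states 0..4; reward only at state 4
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; 0 = left, 1 = right

def step(state, action):
    """Move one cell left or right; reward 1.0 for reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def choose(state):
    """Epsilon-greedy action selection, breaking ties randomly."""
    if random.random() < EPS or Q[state][0] == Q[state][1]:
        return random.randrange(2)
    return 0 if Q[state][0] > Q[state][1] else 1

for episode in range(500):
    s, done = 0, False
    while not done:
        a = choose(s)
        nxt, reward, done = step(s, a)
        # Q-learning update: nudge the estimate towards the reward plus the
        # discounted value of the best action from the next state.
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[s][a])
        s = nxt

# After training, 'right' should look better than 'left' in every non-goal state.
print("learned to head for the goal:", all(q[1] > q[0] for q in Q[:GOAL]))
```

Whether anything like this scales to general intelligence is exactly the open question - toy reward signals are cheap, but a reward for 'do novel science' is as expensive as the experiments themselves.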
I think we'll get there eventually, but I don't think it will happen just after AGI is created. I think it will come many years later, as a gradual process, at least if the current approach is used.
tl;dr: GPT is trained on a dataset consisting of human-level-intelligence text. I think that means if you scale it up as much as possible, the limit is perfect ability to create human-level-intelligence text. If we had super-intelligence-level text, we could train it to be super-intelligent - but we don't, and I don't think it will be easy to create.
ItIsIThePope wrote
Well, that's why AGI is a cornerstone for ASI: if we can get to AGI - an AI capable of human-level intelligence but with far superior processing power and thinking resources in general - it would essentially advance itself to become super-intelligent.
Just as expert humans continuously learn and get smarter through knowledge gathering (the scientific method, etc.), an AI would learn, experiment, and learn some more - only this time at a far, far greater rate and efficiency.
Humans now are smarter than humans then because of our quest for knowledge and the methods we developed for acquiring it; AGI will adhere to the same principles but boost progress exponentially.