Submitted by flowday t3_10gxy2t in singularity
BadassGhost t1_j55vl62 wrote
Reply to comment by Ok_Homework9290 in AGI by 2024, the hard part is now done ? by flowday
I really struggle to see a hurdle on the horizon that will stop AGI from happening this decade, let alone in the next 3 years. It seems the only major problems are hallucination and memory loss, and I think both can be solved by using retrieval datasets in a smart way.
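Roughly the kind of thing I have in mind (just a toy sketch, not any particular product's API; `embed` and `generate` are stand-ins for whatever embedding model and LLM you're using):

```python
import numpy as np

def retrieve_then_answer(question, documents, embed, generate, k=3):
    """Ground the answer in retrieved text instead of the model's weights alone.

    embed maps text -> vector and generate calls the LLM; both are
    placeholders here, not a specific library's API.
    """
    doc_vecs = np.array([embed(d) for d in documents])
    q_vec = embed(question)
    # cosine similarity between the question and every stored document
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    top_docs = [documents[i] for i in np.argsort(sims)[-k:]]
    prompt = (
        "Answer using ONLY the sources below. "
        "If they don't contain the answer, say you don't know.\n\n"
        + "\n\n".join(top_docs)
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```

The "smart way" part is everything around that loop: what you index, how you chunk it, and writing the model's own outputs back into the store so it has a memory that persists between conversations.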
ASI, on the other hand, might well be many years away. Personally I think it will also happen this decade, but that's less certain to me than AGI. It's entirely possible that becoming significantly smarter than humans is extremely difficult or even impossible, although I suspect it isn't.
It will probably also be an extinction-level event. If not the first ASI, then the 5th, or the 10th, etc. The only way humanity survives is if the first ASI gains a "decisive strategic advantage", as Nick Bostrom calls it, and uses that advantage to essentially take over the entire world and prevent any new, dangerous ASIs from being created.
ArgentStonecutter t1_j563lt0 wrote
Self-improving AGI will be followed by ASI so quickly we'll be standing around like a pack of sheep wondering where the sheepdog came from.
BadassGhost t1_j5654od wrote
This is my guess as well, but I think it's much less certain than AGI happening quickly from this point. We know human intelligence is possible, and we can see that we're pretty close to that level already with LLMs (relative to other intelligences that we know of, like animals).
But we know of exactly 0 superintelligences, so it's impossible to be sure that it's as easy to achieve as human-level intelligence (let alone if it's even possible). That being said, it might not matter whether or not qualitative superintelligence is possible, since we could just make millions of AGIs that all run much faster than a human brain. Quantity/speed instead of quality
ArgentStonecutter t1_j56fhsa wrote
I don't think we're anywhere near human level intelligence, or even general mammalian intelligence. The current technology shows no signs of scaling up to human intelligence and there is fundamental research into the subject required before we have a grip on how to get there.
BadassGhost t1_j56i9dt wrote
https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks
LLMs are close to, equal to, or beyond human abilities on a lot of these tasks, though on some of them they're not there yet. I'd argue this is pretty convincing evidence that they are more intelligent than typical mammals at abstract thinking. Clearly animals are much more intelligent in other ways, sometimes more so than humans in particular domains (e.g. the experiment where chimps recall and tap 10 numbers on a screen in order from memory). But in terms of high-level reasoning, LLMs are pretty close to human performance.
ArgentStonecutter t1_j56sxck wrote
Computers have been better than humans at an increasing number of tasks since before WWII. Many of these tasks, like Chess and Go, were once touted as requiring 'real' intelligence. No possible list of such tasks is even meaningful.
BadassGhost t1_j570h0y wrote
Then what would be meaningful? What would convince you that something is close to AGI, but not yet AGI?
For me, this is exactly what I would expect to see if something was almost AGI but not yet there.
The difference from previous specialized AI is that these models are able to learn seemingly any concept, both in training and after training (in context). Things that are out of distribution can be taught with a single-digit number of examples (toy illustration at the end of this comment).
(I am not the one downvoting you)
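To make that concrete, here's the kind of toy in-context prompt I mean (invented convention, nothing from a real benchmark):

```python
# Teach a made-up convention purely from examples in the prompt itself,
# no fine-tuning or gradient updates involved.
prompt = """cat -> c4t
moon -> m00n
paper -> p4p3r
river ->"""
# A current LLM will usually complete this with "r1v3r", inferring the
# vowel-to-digit rule from three examples, which is what "learning in
# context" means in practice.
```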
sumane12 t1_j567fqu wrote
I agree. Short-term memory and long-term learning will reduce hallucinations, and GPT-3 + WolframAlpha looks like it goes a long way toward solving this problem. It's not a perfect solution, but it will do for now.
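Something like this routing idea is all it takes (toy sketch; `ask_llm` and `ask_wolfram` are stand-ins for the real APIs, not how either service is actually called):

```python
def answer(question, ask_llm, ask_wolfram):
    """Toy router: let the LLM decide when to defer to an external solver.

    ask_llm and ask_wolfram are placeholders for the real API calls.
    """
    decision = ask_llm(
        "If the question below needs exact arithmetic, dates, or unit "
        "conversions, reply with just 'WOLFRAM: <query>'. Otherwise answer "
        "it directly.\n\nQuestion: " + question
    )
    if decision.startswith("WOLFRAM:"):
        fact = ask_wolfram(decision[len("WOLFRAM:"):].strip())
        # feed the verified result back so the final answer is grounded in it
        return ask_llm(
            f"Using this verified result: {fact}\n"
            f"Answer the original question: {question}"
        )
    return decision
```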
I'm very much an immediate-takeoff proponent when it comes to ASI. Not only can it think at light speed (humans tend to think at about the speed of sound), it has immediate access to the internet, it can duplicate itself over and over as long as there is sufficient hardware, and its knowledge is expandable without limit as long as you have more hard drive space.
With these key concepts, and again assuming an agent that can act and learn like a human, I just don't see how it would not be immediately superhuman in its abilities. Its self-improvement might take a few years, but as I say, I think its ability to outclass humans would be immediate.
Ok_Homework9290 t1_j568ayw wrote
>I really struggle to see a hurdle on the horizon that will stop AGI from happening this decade, let alone in the next 3 years.
To be honest, I base my predictions on the average predictions of AI/ML researchers. To my knowledge, only a minority of them believe we'll get there this decade, and even fewer in a mere 3 years.
>It seems the only major problems are hallucination and memory loss.
As advanced as AI is today, it isn't even remotely close to being as generally smart as the average human. I think that to close that gap, we would need a helluva lot more than an AI that never spews nonsense and can remember more things.
BadassGhost t1_j56atbp wrote
> To be honest, I base my predictions on the average predictions of AI/ML researchers. To my knowledge, only a minority of them believe we'll get there this decade, and even fewer in a mere 3 years.
I think there's an unintuitive part of being an expert that can actually cloud your judgement. Building these models and being immersed day in, day out in the linear algebra, calculus, and data science makes you numb to the results and to what they extrapolate to.
To be clear, I think amateurs who don't know how these systems work are much, much worse at predictions like this. I think the sweet middle ground is knowing exactly how they work, down to the math and actual code, but without being one of the creators whose day job is to build and perfect these systems. I think that's where the mind is clear enough to understand the actual implications of what's being created.
>As advanced as AI is today, it isn't even remotely close to being as generally smart as the average human. I think that to close that gap, we would need a helluva lot more than an AI that never spews nonsense and can remember more things.
When I scroll through the list of BIG-bench examples, I feel that these systems are actually very close to human reasoning, with just a few missing puzzle pieces (mostly hallucination and long-term memory).
https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks
You can click through the folders and look at each task.json to see what the models can do; there are comparisons to human labelers.
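If you'd rather poke at it programmatically, something like this is enough to skim a task (the JSON schema varies a bit between tasks, so treat it as a sketch):

```python
import json

def peek_bigbench_task(path, n=5):
    """Print the first few examples from one BIG-bench JSON task file."""
    with open(path) as f:
        task = json.load(f)
    print(task.get("description", ""))
    for ex in task["examples"][:n]:
        print("INPUT: ", ex["input"])
        # multiple-choice tasks score options; free-form tasks list target strings
        print("TARGET:", ex.get("target", ex.get("target_scores")))

# e.g. peek_bigbench_task("bigbench/benchmark_tasks/<task_name>/task.json")
```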
Aquamarinemammal t1_j57nkxi wrote
Just fyi, the condition that follows ‘let alone’ is usually the more conservative one. But I see plenty of hurdles, and I’m not convinced any of them can be overcome via scale or data-banks alone. Ability to remember and to distinguish truth from fiction are important, but LLMs also lack first-order logic and symbolic reasoning.
I think the last of these is going to be particularly tricky. I'm not aware of any substantial progress on abstraction for neural nets / ML in recent years; in fact, as I understand them, they seem fundamentally incapable of it. Giant functions / prediction machines just aren't enough, and I struggle to see how people could think otherwise. This type of training detects concrete local patterns in the dataset, but that's it - these models can't generalize their observations in any way. Recurrent NNs and LSTMs may show some promise. I certainly wouldn't get my hopes up that it'll just be handed to us soon as an emergent property.
BadassGhost t1_j5a7cip wrote
Fair, I should have swapped them!
What leads you to believe LLMs don't have first-order logic? I just tested it with ChatGPT and it seems to have a firm grasp of the concept; first-order logic seems to be pretty low on the totem pole of LLM abilities, and the same goes for symbolic reasoning. Try it for yourself!
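Here's the kind of quick probe I mean (made-up premises, not from any benchmark):

```python
# Paste something like this into any chat LLM and see what comes back.
premises = [
    "Every researcher at the lab speaks at least one of French or German.",
    "No one who speaks German works on the robotics team.",
    "Dana is a researcher at the lab and works on the robotics team.",
]
question = "Which language must Dana speak, and why?"
print("\n".join(premises) + "\n" + question)
# A model with working quantifier reasoning should answer: French, because
# the robotics constraint rules out German, and Dana must speak one of the two.
```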
I'm not exactly sure what you mean by abstraction for neural nets. Are you talking about having defined meanings for inputs, outputs, or internal parts of the model? I don't see why that would be necessary at all for general intelligence. Humans don't seem to have substantial, distinct, defined meanings for most of the brain, except for language (spoken and internal), which LLMs are also capable of.
The human brain seems to also be a giant function, as far as we can tell (ignoring any discussion about subjective experience, and just focusing on intelligence).
> This type of training detects concrete local patterns in the dataset, but that’s it - these models can’t generalize their observations in any way.
No offense, but this statement seems to show a real lack of knowledge about the last 6+ years of NLP progress. LLMs absolutely can generalize outside of the training set; that's kind of the entire point of why they've proved useful and why the funding for them has skyrocketed. You can ask ChatGPT to come up with original jokes using topics that you can be pretty certain have never been put together in a joke before, you can ask it to read code it has never seen and give recommendations and answers about it, you can ask it to invent new religions, etc.
These models are pretty stunning in their capability to generalize. That's the whole point!
TopicRepulsive7936 t1_j583aot wrote
GPTs are already superhuman.