Submitted by flowday t3_10gxy2t in singularity
BadassGhost t1_j56atbp wrote
Reply to comment by Ok_Homework9290 in AGI by 2024, the hard part is now done ? by flowday
> To be honest, I base my predictions on the average predictions of AI/ML researchers. To my knowledge, only a minority of them believe we'll get there this decade, and even less in a mere 3 years.
I think there's an unintuitive aspect of being an expert that can actually cloud your judgement. Building these models and being immersed day in, day out in the linear algebra, calculus, and data science makes you numb to the results and to extrapolating from them.
To be clear, I think amateurs who don't know how these systems work are much, much worse at predictions like this. The sweet middle ground is knowing exactly how they work, down to the math and the code, but without being one of the creators whose day job is to build and perfect these systems. That's where the mind is clear enough to grasp the actual implications of what's being created.
>As advanced as AI is today, it isn't even remotely close to being as generally smart as the average human. I think to close that gap, we would need a helluva lot more than making an AI that never spews nonsense and can remember more things.
When I scroll through the list of BIG-bench examples, I feel that these systems are actually very close to human reasoning, with just a few puzzle pieces missing (mostly hallucination and long-term memory).
https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks
You can click through the folders and look at each task's task.json to see what these models can do. There are comparisons to human labelers.
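If you'd rather poke at the tasks programmatically than click through folders, a task.json can be loaded with a few lines of Python. This is a minimal sketch: the inline snippet below is a toy I made up, with field names (`examples`, `input`, `target_scores`) assumed to follow the repo's JSON task convention, not copied from any real task.

```python
import json

# Toy snippet imitating the BIG-bench JSON task layout (assumed structure;
# real files live under bigbench/benchmark_tasks/<task_name>/task.json).
task_json = """
{
  "name": "example_task",
  "description": "Toy multiple-choice task in the BIG-bench JSON style.",
  "examples": [
    {"input": "2 + 2 =", "target_scores": {"4": 1, "5": 0}},
    {"input": "The capital of France is", "target_scores": {"Paris": 1, "London": 0}}
  ]
}
"""

task = json.loads(task_json)
print(task["name"], "-", len(task["examples"]), "examples")
for ex in task["examples"]:
    # In multiple-choice tasks, the choice scored 1 is the correct answer.
    answer = max(ex["target_scores"], key=ex["target_scores"].get)
    print(f"  {ex['input']!r} -> {answer!r}")
```

Swapping the inline string for `json.load(open(path))` on a real task file is the obvious next step if you clone the repo.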