Submitted by 420BigDawg_ t3_107ve7y in singularity
keefemotif t1_j3pjkt6 wrote
Reply to comment by AsheyDS in "Community" Prediction for General A.I continues to drop. by 420BigDawg_
What's interesting is that 10 years ago, the prediction from a lot of people I knew was 10 years, and hey, it's 10 years again. I think psychologically, 10 years is about the horizon people have a hard time imagining past but still think of as pretty close. For most adults, 20-25 years out isn't really going to help their life, so they pick 10 years.
As for the crowdsourcing comment, yikes. We aren't out there crowdsourcing PhDs and open-heart surgery. I know there was that whole crowdfarm article in Communications of the ACM, and I think that's more a degradation of labor rights than evidence of value in random input.
coumineol t1_j3pxmr2 wrote
>What's interesting is that 10 years ago, the prediction from a lot of people I knew was 10 years, and hey, it's 10 years again.
May be true for "the people you know", but if you look at the general opinion of people interested in this field, the predictions used to start in the 2040s just last year.
keefemotif t1_j3qzov0 wrote
Selection bias is a real concern, sure, but "the people I know" are generally software engineers with advanced degrees and philosophers into AI... so it's a pretty educated opinion, bias or not.
coumineol t1_j3r24vc wrote
In that case maybe educated opinion is worse than the wisdom of the crowd: as you can see from the post, the community prediction for AGI was 2040 just last year, which is not "10 years away".
keefemotif t1_j3rsn2g wrote
It's 18 years out. The point I'm making is that we have a cognitive bias towards 10-20 year horizons when making estimates, and we also have a difficult time reasoning about nonlinearity.
The big SingInst hypothesis was that there would be a "foom" moment where progress goes superexponential. From that point of view, you'd have to start talking about a probability distribution over when that nonlinearity happens.
I prefer stacked sigmoidal curves, where progress goes roughly exponential for a while, hits some limit (think Moore's law around 8nm), and flattens out until the next curve takes over.
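Just to make the "stacked sigmoids" picture concrete, here's a minimal sketch (my own illustration, with made-up parameters, not anything from a real dataset): each technology follows a logistic S-curve, and overall progress is the sum of successive curves, each ramping up as the previous one saturates.

```python
import numpy as np

def logistic(t, midpoint, rate, ceiling):
    """Single S-curve: slow start, near-exponential middle, plateau at ceiling."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

# Hypothetical parameters purely for illustration: three overlapping S-curves,
# each representing a new paradigm taking over as the previous one flattens.
curves = [
    dict(midpoint=1975, rate=0.30, ceiling=1.0),
    dict(midpoint=2000, rate=0.25, ceiling=3.0),
    dict(midpoint=2025, rate=0.20, ceiling=9.0),
]

t = np.linspace(1950, 2060, 500)
progress = sum(logistic(t, **c) for c in curves)
# Locally, 'progress' looks exponential during each ramp, then flattens
# until the next curve kicks in -- which is the point I'm making above.
```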
Training giant neural nets into language models is a very important development, but imho AlphaGo was more interesting technically, with its combination of value and policy networks guiding tree search, versus just billions of nodes in some multilayer net.
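For anyone who hasn't looked at how that combination works, here's a rough sketch of the idea (stand-in functions, not the real AlphaGo models or code): a policy network proposes priors over moves, a value network scores positions, and a PUCT-style selection rule in the tree search blends the two.

```python
import math
import random

def policy_net(state, moves):
    """Stand-in policy network: uniform prior over legal moves."""
    return {m: 1.0 / len(moves) for m in moves}

def value_net(state):
    """Stand-in value network: random score in [-1, 1] for the position."""
    return random.uniform(-1.0, 1.0)

def puct_select(node, c_puct=1.5):
    """Pick the child maximizing Q + U, where U is driven by the policy prior."""
    total_visits = sum(child["visits"] for child in node["children"].values())
    best_move, best_score = None, -float("inf")
    for move, child in node["children"].items():
        q = child["value_sum"] / child["visits"] if child["visits"] else 0.0
        u = c_puct * child["prior"] * math.sqrt(total_visits + 1) / (1 + child["visits"])
        if q + u > best_score:
            best_move, best_score = move, q + u
    return best_move

# Toy usage: a root node with two legal moves and policy priors attached;
# a real search would expand leaves and back up value_net(state) estimates.
state, moves = None, ["a", "b"]
priors = policy_net(state, moves)
root = {"children": {m: {"prior": priors[m], "visits": 0, "value_sum": 0.0}
                     for m in moves}}
print(puct_select(root))
```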