Submitted by AdditionalPizza t3_y98hxs in singularity
This (long) post may be controversial, and I'm sure many will disagree.
We know humans have a hard time imagining technological progress at an exponential rate. Talk to anyone in the general public, explain AGI to them, and they'll say it's 50-100 years away at minimum. But I think even those of us who have committed to trying to think exponentially about the rate of technological progress are still guilty of linear thought.
Imagine a black hole. The singularity is the center; we can treat that as the infamous 2045 date (or any date you predict, it doesn't matter). That is most likely the moment AI is able to self-improve at such a rapid pace that the technology it produces is quite literally impossible for us to predict before it happens. We can try to predict when it happens, but the very nature of the singularity means it will produce something we cannot predict, so we'll leave that alone for this discussion. As observers outside the black hole, we witness the event horizon as the point where time stops (to us, the observers). While this is only an analogy and not totally relevant, it does help illustrate the human inability to think exponentially. I'm arguing this is the case even for those of us in this sub who claim to be able to think more exponentially.
To see our own mistakes when predicting exponentially, we need to pick a date in the future and work backwards. The exact date doesn't really matter, but for the sake of argument let's choose 2025 because it's convenient and things align nicely.
I chose 2025 because 2.5 years prior to it (i.e. right now, give or take) we cannot predict with much certainty what Large Language Models will be even remotely capable of. We have no idea what scaling is going to truly produce at this point. Some are saying AGI-level intellect is possible with an LLM, though I'm not going to debate that here.
2020 (-5 years) we had LLMs but thought we needed something else. In the 2.5 years since, we figured out that scaling most likely works. There could still be more to it, but at this point we can't know for sure.
2015 (-10 years) we were in the machine learning game, with no real idea of the implications LLMs could bring to the table yet. We thought true artistry would be the last endeavor an AI could conquer.
2005 (-20 years) we were mostly still pre-smartphone, really getting a sense of how the internet was changing society. We could predict 20 years in the future based on an exponential curve, but we didn't know how or what.
1985 (-40 years) who cares, whatever.
The point being: as we get further up the curve of exponential progress, I believe we're at a point where short-term exponential growth needs to be factored in far more heavily. When we say 10 years from today, we probably really mean 5. Often when we predict 5 years from today, it's just a cop-out; we are protecting our linear instincts and not fully leaning into the exponential rate of progress. Our guts tell us 5 years, but it's likely 2.5 years. In 2025, what feels like 5 years TODAY (2022) will be 1.25 years. It's much easier to accept the exponential rate of progress over longer timeframes; what's really the difference between 20 years and 30 years? But we get very protective of our instincts when it comes to the short term (<10 years). An exponential function doesn't care how big or small the numbers are.
I'm not saying at this rate the singularity will come sooner. I'm saying 5 years of progress in 2020 is equal to 2.5 years in 2022. Or 10 years of progress in 2015 is 2.5 years of progress in 2022.
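To make the arithmetic behind those equivalences concrete, here's a minimal sketch (my own toy model, not something from the post): assume the rate of progress doubles every couple of years, and convert "N years of progress at year Y's pace" into calendar time at a reference year's pace. The post's exact figures don't sit on a single fixed curve (the implied doubling period itself shrinks over time, which is really the point), but the conversion works the same way.

```python
# Toy model of the "timeframes keep halving" arithmetic.
# Assumption (mine, not the OP's exact model): the rate of progress
# doubles every DOUBLING_YEARS, so a fixed chunk of progress takes
# less calendar time the later you start.

DOUBLING_YEARS = 2.0  # assumed doubling period for the rate of progress

def rate(year, reference=2022):
    """Rate of progress in `year`, relative to the reference year's rate."""
    return 2 ** ((year - reference) / DOUBLING_YEARS)

def equivalent_years(n_years, at_year, reference=2022):
    """Calendar time at the reference year's pace that matches
    `n_years` of progress at `at_year`'s pace."""
    return n_years * rate(at_year, reference)

print(equivalent_years(5, 2020))                  # -> 2.5  (5 years of 2020 progress ~= 2.5 years in 2022)
print(equivalent_years(10, 2015))                 # -> ~0.9 (post says 2.5; implies a slower doubling back then)
print(equivalent_years(5, 2022, reference=2025))  # -> ~1.8 (post says 1.25; implies a faster doubling ahead)
```

Under a fixed 2-year doubling the first example reproduces the post's number exactly; the other two only come out right if the doubling period keeps shrinking.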
Of course I'm not saying these dates and timeframes are set in stone. I just chose 2025 based on what I listed above; an argument could be made for any date, really. But the point stands: our brains try really hard to stop thinking exponentially when it comes to the upswing of an exponential curve. I do think the real world can slow a lot of things down, like logistics, manufacturing, etc. But I think by 2025 the only thing holding a takeoff back will literally be humans and our slow bodies. Everything else will be ready to go, waiting for us to get it on the shelf.
>I'm not saying at this rate the singularity will come sooner. I'm saying 5 years of progress in 2020 is equal to 2.5 years in 2022. Or 10 years of progress in 2015 is 2.5 years of progress in 2022.
So if those numbers aren't set in stone, then wtf am I talking about here?
As big tech further tackles things like Codex, the effects will cascade across every single sector that involves information technology. Programmers are the foundation of IT. Forget the argument over whether or not programming will be fully automated anytime soon; it doesn't matter. What matters is that programmers becoming 10% more efficient means every other industry reaps that acceleration of progress. Skip ahead another couple of years, or maybe a couple of months, to when programmers are doubling their productivity, and now every other industry is receiving that 100% boost through new, more efficient software and stronger AI. I believe the tipping point for Transformative AI is happening right now, within the year.
What does this mean exactly? Transformative AI (TAI) is far more important than anyone is giving it credit for. TAI will most likely lead to AGI. TAI takes us to the event horizon and beyond. We are on the cusp, and I know this sounds overly optimistic. Those waiting for AGI and wishing it would come faster: you don't need to. TAI is already here. It began with LLMs, and the progress from 2025 until AGI will be greatly accelerated because of TAI. The stretch from 2025 to AGI is already going to blow our minds. It probably won't be the sci-fi stuff everyone here is all about when talking about the singularity; it will be the world-shaking, policy-changing, employment-shattering stuff.
The disruptions will start coming within 2.5 years (2022 time).
phriot t1_it486pz wrote
I think you're correct in thinking that AI disruption of our lives is here, and will only ramp up in the coming years - even without getting to AGI.
That said, I'm very confident that the shape of our lives will be very similar to today in 2025. Most people will still have jobs. Most people will still carry smartphones. Most car owners will still be the ones driving. Etc. (And you do say something like this towards the bottom of your post.)
Basically, even if next-gen narrow AI expert systems on better hardware are exponentially better by 2025, the timeframe is still so short as to appear linear with respect to impact on people's lives.