Submitted by fortunum t3_zty0go in singularity
fortunum OP t1_j1g3rto wrote
Reply to comment by Sashinii in Hype bubble by fortunum
How does this address any of the points in my post though?
Extrapolating from current trends into the future is notoriously difficult. We could hit another AI winter, all progress could stall, and a completely different domain could take over the current hype. The point is to have a critical discussion instead of just posting affirmative news and theory.
Sashinii t1_j1g6o2q wrote
This Yuli Ban thread - Foundations of the Fourth Industrial Revolution - explains it best. While I recommend reading the entire thread, if you don't want to, here are some quotes:
"The Fourth Industrial Revolution is the upcoming/current one. And this goes into my second point: we won't know when the Fourth Industrial Revolution started until WELL after it's underway.
Next, "inter-revolutionary period" refers to the fact that technology generally progresses in inter-twining S-curves and right as one paradigm peaks, another troughs before rising. This is why people between 1920-1940 and between 2000 and 2020 felt like all the great technologies of their preceding industrial revolutions had given way to incremental iterative improvements and great laboratory advancements that never seemed capable of actually leaving the laboratory. If you ever wondered why the 2000s and 2010s felt indistinguishable and slow, as if nothing changed from 1999 to the present, it was because you were living in that intermediate period between technological revolutions. During that time, all the necessary components for the Fourth Industrial Revolution were being set up as the foundations for what we're seeing now while simultaneously all the fruits of the Third Industrial Revolution were fully maturing and perhaps even starting to spoil, with nothing particularly overwhelming pushing things forward. You might remember this as "foundational futurism."
As it stands, a lot of foundational stuff tends to be pretty boring on its own. Science fiction talks of the future being things like flying cars, autonomous cars, humanoid servant robots, synthetic media, space colonies, neurotechnology, and so on. Sci-fi media sometimes set years for these things to happen, like the 1990s or 2000s. Past futurists often set similar dates. Dates like, say, 2020 AD. According to Blade Runner, we're supposed to have off-world colonies and 100% realistic humanoid robots (e.g. with human-level artificial general intelligence) by now. According to Ray Kurzweil, we were supposed to have widespread human-AI relationships (ala Her) and PCs with the same power as the human brain by 2019. When these dates passed and the most we had was, say, the Web 2.0 and smartphones, we felt depressed about the future.
But here's the thing: we're basically asking why we don't have a completed 2-story house when we're still setting down the foundation, a foundation using tools that were created in the preceding years.
We couldn't get to the modern internet without P2P, VoIP, enterprise instant messaging, e-payments, business rules management, wireless LANs, enterprise portals, chatbots, and so on. Things that are so fundamental to how the internet circa 2020 works that we can scarcely even consider them individually. No increased bandwidth for computer connections? No audio or video streaming. No automated trading or increased use of chatbots? No fully automated businesses. No P2P? No blockchain. No smartphones or data sharing? No large data sets that can be used to power machine learning, and thus no advanced AI.
Finally and a bit more lightheartedly, I'd strongly recommend against using this to predict future industrial revolutions unless you're writing a pulp sci-fi story and need to figure out roughly when the 37th industrial revolution will be underway. If the Fourth Industrial Revolution pans out the way I feel it will, there won't be a Fifth. Or perhaps more accurately, we won't be able to predict the Fifth, specifically when it'll take place and what it will involve."
pre-DrChad t1_j1gb5r9 wrote
Great explanation!
Chad_Nauseam t1_j1gkrkj wrote
If there's a 10% chance that existing trends in AI continue, it's the only news story worth covering. It's like seeing a 10% chance of aliens heading towards Earth.
lovesdogsguy t1_j1iig2g wrote
Reminds me of that Stephen Hawking quote about AI. I'm paraphrasing here, but it's something like,
"if Aliens called tomorrow and said, hey btw, we're on our way to Earth, see you in about 20 years, we wouldn't just say, 'ok great,' and then hang up the phone and go back to our routine. The entire world would begin to prepare for their arrival. It's the same with AI. This alien thing is coming and nobody's preparing for it."
I think his analogy is very succinct.
Ortus12 t1_j1gg2ws wrote
The last AI winter was caused by insufficient compute. We now have sufficient compute, and we've discovered that no new algorithmic advances are necessary: scale up compute for existing algorithms and intelligence scales along with it.
There are no longer any barriers to scaling compute, because internet speeds are high enough that all compute can live in server farms that are continually expanded. Energy costs are coming down toward zero, so that's not a limiting factor either.
The feedback loop now is: AI makes money, the money is used to buy more compute, and the AI becomes smarter and makes more money.
The expert systems of the 80s and 90s grew too complex for dumb humans to manage. This is no longer a bottleneck because, again, all you have to do is scale compute. Smart programmers can accelerate that by optimizing and designing better data curation systems, but even that isn't necessary. It's now a manual labor job that almost anyone can be hired to do (plugging in more computers).
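As a toy illustration of the compounding loop described in the comment above (every constant below is a made-up assumption for the sketch, not a measured value):

```python
# Toy model of the compounding loop: AI earns revenue -> revenue buys compute
# -> capability rises. All constants are illustrative assumptions only.

def run_loop(years=10, compute=1.0, revenue_per_capability=1.0,
             capability_exponent=0.3, reinvestment_rate=0.5):
    """Simulate growth where capability scales as a power of compute."""
    for year in range(1, years + 1):
        capability = compute ** capability_exponent    # diminishing returns to raw compute
        revenue = revenue_per_capability * capability  # assume revenue tracks capability
        compute += reinvestment_rate * revenue         # profits are reinvested in hardware
        print(f"year {year:2d}: compute={compute:8.2f}  capability={capability:6.2f}")

run_loop()
```

With a sublinear capability exponent the loop still compounds, just more slowly than the comment implies; whether the real exponent and reinvestment rate look anything like these numbers is exactly what the replies below dispute.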
GuyWithLag t1_j1hgj0h wrote
Dude, no. Listen to the PhDs - the rapture isn't near, not yet at least.
On a more serious note: this is what the OP refers to when talking about a "hype bubble". The professionals working in the field actually know that the current crop of AI models is definitely not suitable as an architecture for AGI, except maybe as components thereof. Overtraining is a thing, and it's been shown that overscaling is too. Dataset size is king, and the folks creating the headline-grabbing models have already fed the public internet into their datasets.
From a marketing standpoint, there's also the second-mover advantage: see what others did, fix the issues, and choose a different promotion vector. You're seeing many AI announcements in a short span due to the bandwagon effect, caused by a small number of teams showing off multiple years' worth of work.
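For a rough sense of why dataset size becomes the bottleneck, here is a back-of-the-envelope sketch using the widely cited Chinchilla heuristic of roughly 20 training tokens per parameter; the usable-web-text figure is a loose assumption, not a measurement:

```python
# Back-of-the-envelope look at the data bottleneck, using the ~20 tokens per
# parameter compute-optimal heuristic (Hoffmann et al., 2022).
# The public-web figure below is an assumed order of magnitude, not a measurement.

TOKENS_PER_PARAM = 20
ASSUMED_USABLE_WEB_TOKENS = 10e12  # assume ~10 trillion tokens of usable public text

for params in (70e9, 500e9, 2e12):
    tokens_needed = TOKENS_PER_PARAM * params
    ratio = tokens_needed / ASSUMED_USABLE_WEB_TOKENS
    print(f"{params / 1e9:6.0f}B params -> {tokens_needed / 1e12:5.1f}T tokens "
          f"({ratio:.2f}x the assumed public web)")
```

Under these assumptions, compute-optimal training of models much beyond the hundreds of billions of parameters would want more text than the assumed public web contains, which is the "dataset size is king" point in a nutshell.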
lil_intern t1_j1hnp2k wrote
If by rapture you mean evil robots dragging people out of their houses, then yes, that's far off. But what about millions of people's careers becoming obsolete overnight, every other month, due to AI growth in unexpected fields? That seems pretty close.
Ortus12 t1_j1hzcoy wrote
The current popular AI models are only what works best on the current hardware.
We've already designed tons of different models, outlined in many older AI books, that can be used as compute scales (as AI companies make more money to spend on more compute). Even the current models weren't invented recently; they're just now applicable because the hardware is there.
There have been a few algorithmic optimizations along the way, but the larger portion of the scaling has come from hardware.
Second-order companies are taking out first-order companies by improving things, but that still keeps the ball moving forward.
ThePokemon_BandaiD t1_j1ipluc wrote
First of all, current big datasets aren't the full internet, just large subsections: specific datasets of pictures or regular text. We also generate about 100 zettabytes of new data per year as of this year, and generative models can, with the help of humans to sort it for value for now, generate their own datasets. And while currently available LLMs and image recognition and generation models are still quite narrow, systems like Gato, Flamingo, etc. have shown that multimodal models are at the very least possible with current tech, and imo it's pretty clear that narrower AI models could be combined to create a program that acts as an AGI agent.
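A minimal sketch of that last idea, combining narrow models behind one agent interface; the specialist names and the lookup-based routing here are purely hypothetical placeholders, not any existing system:

```python
# Minimal sketch of the "combine narrow models into one agent" idea above.
# The specialist names and the routing rule are purely hypothetical.

from typing import Callable, Dict

class ComposedAgent:
    def __init__(self, specialists: Dict[str, Callable[[str], str]]):
        self.specialists = specialists  # maps a task label to a narrow model

    def act(self, task_label: str, payload: str) -> str:
        # A real system would need a learned router; this is a plain lookup.
        handler = self.specialists.get(task_label)
        if handler is None:
            return f"no specialist available for '{task_label}'"
        return handler(payload)

# Stand-ins for narrow models; real ones would be API calls or local networks.
agent = ComposedAgent({
    "caption_image": lambda path: f"[vision model would describe {path}]",
    "draft_text": lambda topic: f"[language model would write about {topic}]",
})
print(agent.act("draft_text", "AI timelines"))
```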
YesramDeens t1_j1jzcgo wrote
> Listen to the PhDs - the rapture isn't near, not yet at least.
Stop with this misinformation; for every three PhDs saying we will have an AI winter, there are six AI researchers at companies like OpenAI and DeepMind who are extremely excited about the potential of the systems they are creating.
Your unnecessary doomerism is born of a sense of superiority and arrogance in knowledge. Don't be humbled later on.
Krillinfor18 t1_j1hetv3 wrote
The poster addressed both of your points.
Your points seem to be:
1: People you've met in the ML field don't talk much about AGI.
2: You don't believe that LLMs will lead to an AGI or a singularity.
This poster is saying that neither of those things matter if the trend of exponential technological growth continues. Technological growth will progress in a rapid and nonintuitive fashion such that things that seem possible in the next few hundred years could occur in just the next few decades.
It's true that the trend is not guaranteed to continue, but it seems unlikely (at least in my mind, and clearly in others') that even significant economic or societal shifts could alter its course.
AndromedaAnimated t1_j1hrdd5 wrote
THANK YOU!
I love how you show that OP is not giving ANY arguments for ANY critical discussion except his religion (which is: „I don't belieeeeeeve in AGI", which is just as insane as „I belieeeeeeve in AGI").
[deleted] t1_j1g5br4 wrote
[deleted]
fortunum OP t1_j1g63rc wrote
See, the big shiny things we see in "AI" today are driven by a single paradigm change at a time: think convolutions for image processing and transformers for LLMs. Progress could come from new forms of hardware (as it tends to, btw, more so than from actual algorithms), like when we started using GPUs. The current trend suggests it makes sense to build the hardware more like we build the models (neuromorphic hardware); that way you can save orders of magnitude of energy and compute so that it operates more like the brain. This is only an example of what else could happen. It could also be that language models stop improving, as we are apparently nearing the limit of language data.
DaggerShowRabs t1_j1hn4iy wrote
An actual AI winter at this point is about as likely as society instantaneously collapsing.
An AI winter is not an actual, valid concern for anyone in the industry for the foreseeable future.
I get wanting to have a critical discussion about this, but then when someone talks about exponential growth, you need to do better than parroting a talking point that mainstream journalists who have no idea what they are talking about spew out.
I'm all for critical discussion, but talking about another actual AI winter like the 70s or early 2000s is kind of a joke. I'm really surprised anyone with even a little bit of knowledge of what is going on in the industry would say something this out-of-touch.
And none of that is to say AGI is imminent, just that an AI winter is literally the most out-of-touch counterpoint you could possibly use.
AndromedaAnimated t1_j1hr2l2 wrote
You are not the master of this subreddit 🙄 why does everyone think they can decide what others talk about?
eve_of_distraction t1_j1isyex wrote
They don't. There is an extremely obnoxious and noisy minority, and a mostly silent majority.