Submitted by Gortanian2 t3_123zgc1 in singularity
I see discussions on this sub that would lead people to cash out their 401(k)s and sit on their hands waiting for ASI to save or enslave us all.
Guys, the singularity hypothesis is just that: a hypothesis. There are some very well-made arguments out there against the plausibility of a hard takeoff.
A week ago I was having the same thoughts you are. How impossibly lucky am I to be alive during the invention of AGI and the intelligence explosion that will follow? What will I do with myself after nuclear fusion, the cure for aging, and interstellar travel are solved? Should I be worried about AI enslaving or eradicating humanity?
While all of those things would be wonderful (or terrible), it’s important for us to recognize the possibility that it might not happen in our lifetimes, if ever.
At the very least, you should read the articles below in their entirety before blowing your kids’ college funds on Nvidia stock.
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
I would love to read counterarguments to these. If we can't come up with good rebuttals, it should raise a red flag. And if this post gets downvoted to hell and we are unable to foster these kinds of debates as a community, then we are effectively treating the singularity hypothesis as a religion.
Ghostof2501 t1_jdx5kxy wrote
Look, I’m not here to be rational. I’m here to be sensationalized.