Submitted by Gortanian2 t3_123zgc1 in singularity
ThePokemon_BandaiD t1_jdyqj9n wrote
Reply to comment by Gortanian2 in Singularity is a hypothesis by Gortanian2
If we reach human-level AGI, why would it stop there? Surely people will set AGIs on the task of self-improvement and AI development.
Gortanian2 OP t1_jdythpp wrote
It seems obvious, right? Just tell the AI to rewrite and improve its own code repeatedly, and it takes off.
As it turns out, recursive self-improvement doesn’t necessarily work like that. There might be limits to how much improvement can be made this way. The second article I linked gives an intuitive explanation.
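Here's a rough toy model of the idea (my own made-up numbers, purely for intuition, not the article's actual math): if each round of self-improvement yields a geometrically shrinking marginal gain, total capability converges to a finite ceiling instead of exploding.

```python
def recursive_self_improvement(g0=0.5, k=0.6, rounds=30):
    """Toy model: each round multiplies capability by (1 + g),
    where the marginal gain g shrinks geometrically (assumed, not derived)."""
    capability = 1.0
    g = g0
    history = [capability]
    for _ in range(rounds):
        capability *= 1.0 + g  # apply this round's improvement
        g *= k                 # diminishing returns on the next round
        history.append(capability)
    return history

trace = recursive_self_improvement()
print(f"after 5 rounds:  {trace[5]:.3f}")   # most of the gains already banked
print(f"after 30 rounds: {trace[30]:.3f}")  # flattens out near a ceiling (~3x)
```

With these parameters the curve levels off around 3x the starting capability. The takeoff only becomes unbounded if the per-round gains stop shrinking, which is exactly the assumption being questioned.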
ThePokemon_BandaiD t1_jdyuo2r wrote
That article is from 2017 and shows no understanding whatsoever of the theories and technology used in current generative AI.
ThePokemon_BandaiD t1_jdytnho wrote
Humans are definitely not the theoretical limit for intelligence.
Gortanian2 OP t1_jdywit9 wrote
I agree with you. I’m only questioning the mathematical probability of an unbounded intelligence explosion.
Ok_Faithlessness4197 t1_jdz5m2l wrote
I just read the second article you linked, and it does not provide any scientific basis for bounds on an intelligence explosion. Given the recent uptrend in AI investment, I'd give it 5-10 years before an ASI emerges. In particular, once AI takes over microprocessor development, it will almost certainly kickstart this explosion.
theotherquantumjim t1_jdz19vm wrote
I think AGI (depending on your definition) is pretty close already. As you’ve alluded to, we may never get ASI. I’m not sure that really matters. The singularity suggests a point where the tech is indistinguishable from magic, e.g. nanotech, FTL travel, etc. I don’t think we need that kind of event to fundamentally reshape society, as others have said.
Ok_Tip5082 t1_jdyufwc wrote
When you're in the elbow, it's really hard to tell whether the growth is logistic, exponential, or hyperbolic.
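Quick illustration (parameters made up): a logistic curve and an exponential matched to the same early growth rate are nearly identical through the elbow.

```python
import numpy as np

# Assumed parameters: carrying capacity K and growth rate r, both arbitrary.
K, r = 100.0, 1.0
t = np.linspace(0, 4, 9)
logistic = K / (1 + (K - 1) * np.exp(-r * t))  # starts at 1, saturates at K
exponential = np.exp(r * t)                    # starts at 1, never saturates

for ti, lo, ex in zip(t, logistic, exponential):
    print(f"t={ti:3.1f}  logistic={lo:7.2f}  exponential={ex:7.2f}")
# The columns track closely through the elbow and only diverge once the
# logistic curve starts feeling its ceiling; by then, early data alone
# could never have told you which regime you were in.
```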