Submitted by Liberty2012 t3_11ee7dt in singularity
JVM_ t1_jads1v0 wrote
I don't think we can out-think the singularity. Just like a single human can't out-spin a ceiling fan, the singularity will be fast enough to be beyond human containment attempts.
What happens next though? I guess we can try to build 'friendly' AIs that tend toward not ending society, but I don't think true containment can happen.
Liberty2012 OP t1_jadvgev wrote
I tend to agree, but there are a lot of researchers moving forward with this endeavor. The question is why? Is there something the rest of us are missing with regard to successful containment?
When I read topics related to safety, the language tends to be abstract: "We hope to achieve ..."
It seems to me that everyone sidesteps the initial logical conflict: proponents are proposing that a lower intelligence is going to "outsmart" a higher intelligence.