Submitted by dracount t3_zwo5ey in singularity
mootcat t1_j1wfb3v wrote
Reply to comment by Calm_Bonus_6464 in Concerns about the near future and the current gatekeepers of AI by dracount
Indeed. This sub has major issues conceptualizing superintelligence, assuming we are guaranteed to have all our wishes fulfilled.
We are functionally growing a God. There is no containing it, and we had better hope our efforts at alignment before the point of explosive recursive growth were enough.
Just from the simple systems we've seen so far, we have witnessed countless examples of misalignment: systems working literally as intended, yet against the desires of their programmers.
This Rumsfeld quote always comes to mind:
"Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know."
Any one of these unknown unknowns could result in the utter decimation of life by a superintelligent AI.