
Cryptizard t1_izznmo0 wrote

I said it in another reply, but there are some types of cryptography that are information-theoretically secure, meaning that no matter how much computation you have, you provably cannot break them. These will continue to be secure against a singularity AI.
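The classic example of an information-theoretically secure scheme is the one-time pad. Here is a minimal sketch (not from the original comment) showing why: encryption is just XOR with a truly random key as long as the message, so a given ciphertext is equally consistent with every possible plaintext of that length.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: the key must be truly random, exactly as long as the
    # message, and never reused. XOR is its own inverse, so the same
    # function both encrypts and decrypts.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # fresh random pad for this message
ct = otp_xor(msg, key)

# Without the key, no amount of computation helps: for ANY candidate
# plaintext p of the same length, there exists a key (namely ct XOR p)
# that maps p to this exact ciphertext.
assert otp_xor(ct, key) == msg
```

The catch, of course, is key distribution: you need as much shared random key material as data, which is why computationally-secure ciphers dominate in practice.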

As for the rest of cryptography, it depends on the outcome of the P vs. NP question. It is conceivable that an ASI could prove that P = NP and break all computationally-bounded cryptography. But if P != NP, as most mathematicians believe, then there will be some encryption schemes that cannot be broken^(*) no matter how smart you are or how much computation you have access to. A subset of our current ciphers may be broken (e.g., an ASI could find an efficient factoring algorithm and break RSA), but we have enough ciphers based on different problems conjectured to be hard that at least some of them would turn out to be truly intractable.

For example, suppose that breaking AES is truly outside of P. Then, by the Landauer limit, the most efficient computer physically possible would need roughly 1% of the mass-energy of the Milky Way galaxy to break a single AES-256 ciphertext by brute force. Note that this is an underestimate, because it assumes only one elementary operation per key attempt, when in reality each attempt takes many more.
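That estimate can be sanity-checked with rough numbers. The temperature and galaxy-mass figures below are my own assumptions, not from the comment, and different choices (e.g., operating near the CMB temperature, or counting dark matter in the galaxy's mass) shift the answer by an order of magnitude or two in either direction:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed operating temperature (room temp), K
c = 2.998e8          # speed of light, m/s

# Landauer limit: minimum energy to erase one bit is k_B * T * ln(2),
# about 2.9e-21 J at 300 K. Charge one such operation per key tried.
e_per_op = k_B * T * math.log(2)
brute_force_energy = (2**256) * e_per_op

# Milky Way stellar mass: roughly 6e10 solar masses (estimates vary widely,
# and including dark matter raises this by more than an order of magnitude).
m_galaxy = 6e10 * 1.989e30          # kg
galaxy_energy = m_galaxy * c**2     # E = mc^2

fraction = brute_force_energy / galaxy_energy
print(f"{fraction:.1%}")  # with these assumptions, a few percent
```

With stellar mass only this lands at a few percent; including dark matter it drops closer to 0.1%, so "about 1% of the galaxy's mass-energy" is the right order of magnitude either way.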

^(*)This is a small oversimplification: it is possible that we live in a world where P != NP but we still don't have any useful cryptography. See Russell Impagliazzo's famous paper "A personal view of average-case complexity."
