Submitted by Outrageous_Point_174 t3_zkdxot in singularity

For almost a decade there have been many reports on the danger of quantum computers, in case they become capable of breaking RSA and other algorithms, which in turn created post-quantum cryptography. I think for now (just a couple of years) we don't really have to worry about that, since the computers are still at an early stage.

But something I couldn't really find many answers about online is how A.I., and especially advanced A.I., could play a part in this. Do you think advanced A.I. could crack and break our encryption systems? And if so, how many years do you think it will take?

3

Comments


Cryptizard t1_izznmo0 wrote

I said it in another reply, but there are some types of cryptography that are information-theoretically secure, meaning no matter how much computation you have you provably cannot break them. These will continue to be secure against singularity AI.

As to the rest of cryptography, it depends on the outcome of the P vs. NP question. It is conceivable that an ASI could prove that P = NP and break all computationally secure cryptography. But if P != NP, as most mathematicians believe, then there will be some encryption schemes that cannot be broken^(*) no matter how smart you are or how much computation you have access to. A subset of our current ciphers may be broken, e.g. an ASI could find an efficient algorithm for factoring and break RSA, but we have enough schemes based on different problems that are conjectured to be hard that at least some of them would turn out to be truly intractable.
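To make the RSA point concrete, here is a toy-sized sketch (mine, not the commenter's, with deliberately tiny made-up numbers): once you can factor the modulus efficiently, the private key falls out in a couple of lines, so a fast factoring algorithm really would kill RSA. The `trial_division` helper below is just a stand-in for that hypothetical fast factoring routine.

```python
# Toy textbook RSA with tiny numbers, to show why efficient factoring breaks it.

def trial_division(n):
    """Stand-in for a hypothetical efficient factoring algorithm."""
    f = 3
    while n % f:
        f += 2
    return f, n // f

n, e = 3233, 17                 # tiny public key (n = 61 * 53), illustration only
ciphertext = pow(42, e, n)      # encrypt the message 42 with the public key

# An attacker who can factor n recovers the private exponent d directly.
p, q = trial_division(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)             # modular inverse of e, Python 3.8+

print(pow(ciphertext, d, n))    # prints 42: plaintext recovered without the private key
```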

For example, suppose that breaking AES is truly outside of P. Then, according to the Landauer limit, the most efficient computer physically possible would take about 1% of the mass-energy of the Milky Way galaxy to break one AES-256 ciphertext. Note that this is an underestimate, because I assume it only takes one elementary computation per key attempt, when in reality it is a lot more than that.
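Here is a rough back-of-the-envelope version of that estimate (my numbers, not the commenter's; the exact percentage swings by an order of magnitude depending on the temperature and on which Milky Way mass estimate you plug in, so it lands near the 1% figure rather than exactly on it):

```python
import math

# Landauer limit at room temperature: minimum energy to erase one bit.
k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # assumed room temperature, K
E_per_op = k_B * T * math.log(2)    # ~2.9e-21 J per bit erasure

# Charge one bit erasure per key tried, for an exhaustive AES-256 key search.
E_total = 2**256 * E_per_op         # ~3e56 J

# Milky Way mass-energy via E = mc^2, using an assumed ~10^12 solar masses.
c = 3.0e8                           # m/s
M_galaxy = 2.0e42                   # kg
E_galaxy = M_galaxy * c**2          # ~1.8e59 J

print(f"brute-forcing AES-256: {E_total:.2e} J")
print(f"fraction of Milky Way mass-energy: {E_total / E_galaxy:.2%}")
```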

^(*)This is a small oversimplification; there is the possibility that we live in a world where P != NP but we still don't have any useful cryptography. See Russell Impagliazzo's famous paper "A personal view of average-case complexity."

8

CookiesDeathCookies t1_j0hxwky wrote

Even if AI can't break encryption directly, it will probably be able to hack people and just steal info from them. Or bribe them. Or hack servers. Or anything else. There are too many potential security holes that encryption won't help with.

2

RedErin t1_izz5yip wrote

it will be able to do anything it wants

−1

Cryptizard t1_izzl2u8 wrote

This is a bad take. There are many limits, physical and computational, that prevent even a singularity AI from doing “anything it wants.” We know, for instance, that the one-time pad is an information-theoretically unbreakable encryption scheme, regardless of how smart you are or how much computation you have.

Moreover, if P != NP like we believe, there are other encryption schemes that can’t be broken even with a computer the size of the galaxy. These are fundamental limits.
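As a concrete illustration of the one-time pad point (a minimal sketch added here, not something posted in the thread): XOR the message with a truly random key of the same length. Every plaintext of that length is equally consistent with a given ciphertext, so no amount of computation narrows down the message, provided the key is random, kept secret, and never reused.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each message byte with a fresh random key byte.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt                   # XOR is its own inverse

message = b"attack at dawn"
key = secrets.token_bytes(len(message))     # one fresh random key per message

ciphertext = otp_encrypt(message, key)
print(otp_decrypt(ciphertext, key))         # b'attack at dawn'
```

The catch is key management: the key has to be as long as the message and used only once, which is why the one-time pad is provably secure but rarely practical.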

9

Outrageous_Point_174 OP t1_j1ho8hv wrote

Thanks for answering my question. Very insightful, although I do have a question: what's the point of ASI and quantum computers (if they get to the level where they can be useful) if they can't even crack encryption schemes like the one-time pad you mentioned? I would have thought that ASI would be capable of almost everything, including taking down the cryptography industry.

1

Cryptizard t1_j1hrdyt wrote

No, ASI is not capable of everything. There are just fundamental limits to computation, like there are limits to physics. It can still do a lot, though; there are only a few things we know (or conjecture) lower bounds for. It just happens that cryptography is designed precisely to resist even incredibly advanced computers.

2

Outrageous_Point_174 OP t1_j1hrn1p wrote

Ah ok. I was kinda hoping to see a quantum computer or some other technology breaking the encryption system in the near future, and the consequences for the modern world, but it will probably never happen. Thanks for explaining.

1

Outrageous_Point_174 OP t1_izz6gw1 wrote

Interesting. I too think that eventually they could solve it, but that it would take some time because of the complex math involved. I wonder, if that happens, whether they would decide to share it with us and even develop another algorithm, or whether they would never do that and it would be up to us to solve it.

1

RedErin t1_izz6x40 wrote

It all depends on how it was programmed and what its goals are. Hopefully, once it gets smart enough to do cool stuff, it will realize if it has any bad code and fix it before it does anything evil.

1

Superschlenz t1_j006ko4 wrote

That's easy: if it's encrypted, then it's a lie. Wasting compute on lies is not intelligent. Though lies can be turned into truth by stupid believers, depending on stupid believers is not intelligent either.

If you still want to eavesdrop on it, you can always head it off when it leaves Alice's or enters Bob's body unencrypted.

−5

Cryptizard t1_j006y22 wrote

Wut

6

Practical-Mix-4332 t1_j00c1wg wrote

As weird as that response is, it kind of makes sense. OP is making the case that encryption wouldn’t matter anymore because it would find ways around it, either by social engineering or hacking less secure parts of the system.

2