
Semyaz t1_iuqfxwq wrote

The most likely reason is that it gives an easy analogy between classical and quantum computing. The kinds of problems where quantum computers are expected to separate themselves from classical ones involve many iterative steps on classical hardware.

The output of a classical computer can be larger than the number of bits its processor works with. For instance, some mathematics software can do arithmetic with numbers far bigger than 2^64. Quantum computers can't really do that: every step of the problem has to fit into their qubits at the same time to be processed. In essence, you can only solve problems whose output fits in however many qubits you have, or fewer.
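To make the classical side of that contrast concrete, here is a minimal sketch showing that ordinary software is not limited by the processor's word size. Python integers (the choice of language here is illustrative, not from the original comment) grow to whatever width the result needs:

```python
# Classical arbitrary-precision arithmetic: Python ints are not bounded
# by a fixed register width, so results can exceed 2**64 with no effort.
big = 2**64 + 1       # already larger than a 64-bit register can hold
product = big * big   # Python transparently widens the integer

assert product > 2**128
print(product.bit_length())  # bits needed to store the result: 129
```

A quantum register has no analogous trick: an n-qubit machine measures out at most n classical bits per run, so the answer must fit in the qubits available.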

In fact, quantum computers must interface with classical computers to set their inputs and read their outputs, and that classical interface makes choosing a power of two sensible.

Finally, qubits are read out as binary states: every measurement yields a 0 or a 1, so the data representation is directly analogous to classical binary, and most computer scientists naturally conceptualize the results in base 2.
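That readout behavior can be sketched in a few lines. This is a toy simulation (not a real quantum SDK; the function name and representation are assumptions for illustration): a single qubit is modeled as two complex amplitudes, and measurement collapses it to a classical 0 or 1 with probabilities given by the Born rule.

```python
import random

def measure(a: complex, b: complex) -> int:
    """Toy measurement of a qubit with amplitudes (a, b),
    where |a|^2 + |b|^2 = 1. Returns a classical 0 or 1."""
    p0 = abs(a) ** 2  # Born rule: probability of reading 0
    return 0 if random.random() < p0 else 1

# Equal superposition: |a|^2 = |b|^2 = 0.5, so readouts split ~50/50,
# but each individual result is still strictly a 0 or a 1.
amp = 2 ** -0.5
samples = [measure(amp, amp) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.5
```

However rich the state is before measurement, what comes out is always plain bits, which is why base-2 thinking carries over so directly.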

That said, none of these reasons makes a power-of-two qubit count important or strictly required.
