In my previous post, I claimed that quantum computing would change our world. Therefore, the time to start learning quantum computing is now.
One of my readers asked the legitimate question of when I expect the quantum revolution to happen.
Of course, it would be presumptuous to say I knew when this revolution would happen. After all, I am not an oracle.
Therefore, all I can give is my two quanta.
The expression “my two cents” is shorthand for “my two pennies’ worth.” It is a way of offering one’s opinion and saying that it is only worth two pennies. While I like this humble way of valuing an opinion at only two cents, it is inappropriate in the context of quantum computing. Since a quantum (plural: quanta) is the smallest possible discrete unit of any physical property, I believe “my two quanta” is a better measure of my opinion.
TL;DR: My answer is: “We are experiencing this revolution already!”
So, let me go back a little to explain why I believe that we’re in the middle of this revolution.
I want to start at my birth in 1981. But, no, don’t worry. This won’t be too much of a personal story. Yet coincidentally, in 1981, Nobel Prize winner (1965) Richard P. Feynman proposed using the phenomena of quantum physics to perform computations.
Eleven years later, in 1992, David Deutsch (together with Richard Jozsa) came up with a quantum algorithm that could be exponentially faster than any classical algorithm. See this post for an illustrative description of this algorithm. However, it solved only a toy problem and did not receive much attention beyond academia.
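If you want to see how little machinery this takes, here is a minimal sketch of the single-qubit (Deutsch) version of the algorithm, simulated with plain NumPy rather than a quantum SDK. The oracle construction and function names are my own, for illustration only.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)                                # single-qubit identity

def oracle(f):
    """Build the unitary U_f|x, y> = |x, y XOR f(x)> for f: {0,1} -> {0,1}."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1.0
    return U

def deutsch(f):
    """Decide whether f is constant or balanced with a single oracle query."""
    state = np.kron([1.0, 0.0], [0.0, 1.0])  # start in |0>|1>
    state = np.kron(H, H) @ state            # superposition; target qubit enables phase kickback
    state = oracle(f) @ state                # the one and only query to f
    state = np.kron(H, I2) @ state           # interference reveals the global property
    p_one = state[2] ** 2 + state[3] ** 2    # P(first qubit = 1), basis ordered |xy>
    return "balanced" if p_one > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant -> constant
print(deutsch(lambda x: 1))      # constant -> constant
print(deutsch(lambda x: x))      # balanced -> balanced
print(deutsch(lambda x: 1 - x))  # balanced -> balanced
```

The point is the single call to `oracle(f)`: one quantum query decides a property that classically requires two evaluations of f.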
Two years later, in 1994, Peter Shor showed how a quantum computer could factor a number in polynomial time, a problem for which every known classical algorithm needs super-polynomial time. The difference between polynomial and exponential complexity must not be underestimated. In other words, a quantum computer could solve a task that is intractable for any conceivable classical computer.
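To get a feeling for this difference, consider a toy comparison. The exponents here are illustrative placeholders, not the actual complexities of Shor’s algorithm or of classical factoring.

```python
# Illustrative only: a generous polynomial (n^3) vs. an exponential (2^n).
for n in (10, 50, 100, 500):
    print(f"n={n:>3}:  n^3 = {n**3:.1e}   2^n = {2**n:.1e}")
```

At n = 10, the two are neck and neck; at n = 100, the exponential is already a 30-digit number while the polynomial is a million. No amount of classical hardware closes that gap.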
However, quantum computation remained solely theoretical. At the time, we did not even know whether such a machine was possible at all.
But in 1998, the first two-qubit quantum computer became a reality. This was the proof of concept, if you will.
Another 21 years later, in 2019, Google claimed to have reached quantum supremacy (also called quantum advantage). They performed a task in 200 seconds that they estimated would take a classical supercomputer about 10,000 years to complete.
Most recently, on May 10th, 2022, IBM updated its quantum roadmap. They plan to provide chips with more than 1,000 qubits in 2023 and systems with 4,158+ qubits in 2025.
Looking at the history of quantum computing, we can see a clear upswing in the number of qubits right now. Of course, a 1,000-qubit computer won’t run Shor’s algorithm on numbers of practical size; that will require error-corrected machines with far more qubits. But the sheer numbers and the roadmap show that we are entering a phase with a quantum version of Moore’s law: the number of qubits doubles roughly every year.
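Here is a back-of-the-envelope check of that claim, using the qubit counts IBM has announced for its chips (Eagle, Osprey, Condor, Kookaburra) as the assumed data points.

```python
# Qubit counts from IBM's published roadmap (assumed data points):
# Eagle (2021), Osprey (2022), Condor (2023), Kookaburra (2025).
qubits = {2021: 127, 2022: 433, 2023: 1121, 2025: 4158}
first, last = min(qubits), max(qubits)
growth = (qubits[last] / qubits[first]) ** (1 / (last - first))
print(f"Implied annual growth factor: {growth:.2f}x")  # about 2.4x per year
```

About 2.4x per year, so even more than doubling, if the roadmap holds.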
Quantum computing hardware is inevitably transitioning into the mainstream.
How about the software?
While the hardware manufacturers seem to be keeping their promise to deliver, the urgent question is whether we will develop the algorithms that use these machines to produce real business value.
Shor’s algorithm is now 28 years old. And it seems as if there has been no real progress since then. But this notion could not be further from the truth.
The reason is that problem-solving does not happen in public. Take financial institutions, for instance. Some already invest significantly in quantum computing to develop new algorithms. It would be a big surprise if they didn’t create new solutions to their particular problems. However, they do not report much of it, to keep their competitive advantage.
I want to compare quantum processing units (QPUs) with graphics processing units (GPUs). The original purpose of GPUs was to calculate the graphics shown on your screen. Coincidentally, the calculations we use to train deep neural networks, chiefly large matrix multiplications, are pretty similar. These special-purpose chips provided an unanticipated boost for the development of machine learning algorithms.
Today, we see the results, such as the automated translation of texts and self-driving cars. But the revolution dates back to 2006, when GPUs were used for deep learning for the first time. It took people who knew the details of the technology, the GPU. And even more importantly, these people had a deep understanding of the problem domain: training neural networks.
The QPUs we will have in the near future are powerful devices, suitable for solving certain problems exponentially faster than any classical computer. All it takes now is smart people who know about quantum computing and a lot about their problem domains.
It may take a few years or even a decade until we see the results of this revolution. But the revolution of how we solve the urgent problems of humanity is happening now.
So, in my opinion, if you are an expert in any problem domain, from finance to agriculture to astrophysics, the time to get started with quantum computing is now.