Landscape of Quantum Computing in 2022
See here for the 2023 update.
Last year I wrote a brief explainer of where quantum computing was and where it needs to go, specifically to break crypto. There's been some progress in quantum computing, so I've updated the chart below:
Check last year's post for the explanation of the chart. The key differences this year:
- Google implemented a surface code on their "Sycamore" chip, which they also expanded to 72 qubits. Like the results from other groups last year, the logical qubit is still worse than the physical qubits, but the exciting part is that the logical qubit gets better as they use more physical qubits. This means that, with enough qubits, they should be able to make a logical qubit that outperforms the physical ones (there's a back-of-the-envelope sketch of this scaling after the list).
- The other big news: IBM released data from its 127-qubit Eagle processor. I have a cynical hypothesis: it wasn't quite ready to demonstrate when they announced it last year, but they had to announce something, since they had just published their quantum roadmap in 2020 and it would look really bad to fall behind only one year into the plan. I'm still impressed, even if it's slightly late. They have also announced a new 433-qubit chip, which I have again excluded from the chart because there is no data on its qubit quality yet.
- I plotted both IBM's and Google's latest results as box-and-whisker plots, to show the variance in quality among individual qubits.
- The "Useful quantum chemistry without error correction" moved down, based on the conclusions in this paper. The improvement is "error mitigation", a collection of techniques that are simpler than full error correction, but offer fewer guarantees. I still can't claim that the chemistry will actually be "useful" (I don't think the paper does either), just that it might answer a real chemistry problem that classical computers can't.
If you compare this to last year's chart, it seems like there were big breakthroughs in qubit quality. I'm not sure there were, since my method of producing a single number for "qubit error" wasn't rigorous. There seems to be progress, but I wouldn't claim any precise trends.
Other news that doesn't fit on the chart:
- There's been some work on error correction where the logical qubit actually lasts longer than the physical qubits (e.g. here and here), but I'm not sure the error correction they used will scale very well.
- There were huge breakthroughs in quantum "LDPC" codes, which promise much better performance than surface codes. The catch: they need arbitrarily long-range interactions between qubits. In contrast, the surface code can use qubits that only interact with their neighbours in a grid (Google's and IBM's qubits are on a grid; Honeywell's are not). For this reason I'm not optimistic about LDPC codes, but other people are (see the toy example below).
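To see why connectivity is the sticking point, here's a toy comparison (the checks below are made up for illustration, not taken from any real code): a surface-code check touches four neighbouring qubits on a grid, while an LDPC-style check can touch qubits scattered across the whole device.

```python
from itertools import combinations

side = 5  # toy 5x5 grid of qubits
coords = [(r, c) for r in range(side) for c in range(side)]

def check_span(check):
    # Largest Manhattan distance between any two qubits in one check.
    pts = [coords[q] for q in check]
    return max(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in combinations(pts, 2))

local_check = [0, 1, side, side + 1]   # surface-code style: a 2x2 cell
scattered_check = [0, 7, 13, 24]       # LDPC style: spread over the grid

print("surface-code-style span:", check_span(local_check))      # 2
print("LDPC-style span:        ", check_span(scattered_check))  # 8
```

A span of 2 means nearest-neighbour couplers are enough; a span of 8 on a 5x5 grid means couplers reaching across the whole chip, which superconducting devices don't have today.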
Besides those, not much has changed. The main conclusions from last year still hold:
- We need exponential improvement to get anywhere.
- There is a big dead zone where quantum computers might be completely useless.
- Switching to quantum-safe cryptography today is still a good idea (this year NIST announced which quantum-safe cryptography it will standardize).
Chart methodology
(see last year's page)
- Current chips:
  - Google 2022 came from here, where I used the CZ error for the box-and-whisker chart. I listed the number of qubits as 72 (the total on the device) even though the paper only uses 49 (so maybe the other qubits are much worse, or non-functional).
  - IBM 2022 came from live data, which I captured on Nov 10. I took the worst of all the error types for each qubit or pair of qubits (sketched after this list).
  - For Honeywell, I used the worst error type (measurement) from their simulation parameters (which are based on experiment and are worse than all the experimental error types they list in that paper).
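For reference, the aggregation behind the box-and-whisker points looks roughly like this (the calibration dict is a made-up stand-in, and its field names are hypothetical, not IBM's actual schema):

```python
import matplotlib.pyplot as plt

# Made-up per-qubit calibration data; the real data has one entry per
# qubit (and per coupled pair), with several reported error types.
calibration = {
    0: {"readout": 0.012, "single_gate": 0.0004, "two_gate": 0.009},
    1: {"readout": 0.031, "single_gate": 0.0007, "two_gate": 0.015},
    2: {"readout": 0.008, "single_gate": 0.0003, "two_gate": 0.011},
}

# Worst error type per qubit, then a box plot over all qubits.
worst = [max(errors.values()) for errors in calibration.values()]
plt.boxplot(worst)
plt.ylabel("worst per-qubit error")
plt.show()
```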