
The end of long-trusted encryption is already scheduled; does the actual quantum risk keep pace with the schedule?

  • Writer: Alexey
  • Jun 27
  • 8 min read

Widely deployed cryptographic algorithms like RSA and ECC, used virtually everywhere, are now on a retirement countdown. International standardisation bodies have mandated their deprecation by 2030 and a full ban by 2035. Meanwhile, the latest optimised blueprint for Shor's algorithm suggests that breaking a 2048-bit RSA key requires just one million noisy qubits and under a week of runtime. This article unpacks that timeline and explains the terms and jargon along the way: how risk affects prioritisation, what role standards play, and how the scheduled phase-out of current algorithms aligns with the actual pace of quantum computing progress.


Cryptographic standards set the rules of the game, while the quantum risk story provides the context.


Quantum computing has moved conventional public-key cryptography toward obsolescence. International standards organisations, including NIST (US), ANSSI (France), BSI (Germany), NCSC (UK), KISA (South Korea) and CNCA (China), have already concluded that today's asymmetric primitives must be replaced by post-quantum alternatives. To the casual observer, classical RSA, a widely used public-key cryptographic algorithm, and its post-quantum candidate, ML-KEM, may appear similar; in reality, only the latter is expected to withstand a quantum attack. Because only a small circle of institutions can independently vet cryptographic designs, commercial and governmental stakeholders generally adopt the rulings issued by these authorities.


These standards streamline complex technical decisions within an adopting organisation, yet monitoring technical progress still pays dividends. A realistic quantum-readiness timeline sharpens budget planning, project phasing and board communication. Clear explanations of why current cryptography is being retired also accelerate organisational buy-in. Standards now serve as the primary compass, and ongoing research supplies valuable context.



The computational assumptions that keep asymmetric algorithms secure have turned out to be susceptible to quantum algorithms.


In the quantum risk story, quantum computers are the central component of anticipated future attacks on modern cryptographic systems.

Within post-quantum security research, RSA continues to attract more scrutiny than newer asymmetric schemes such as elliptic-curve cryptography (ECC). The focus is somewhat arbitrary and partly historical: RSA has been the workhorse of internet security for decades. Although RSA's strength rests on the difficulty of factoring a large composite modulus, while ECC relies on the elliptic-curve discrete logarithm problem, both defences share a critical vulnerability: efficient period finding of an appropriate modular function. No known classical algorithm can find the period of such a function efficiently; on classical hardware, period finding remains excruciatingly slow. If one existed, it would open both RSA and ECC secrets. Shor's algorithm, the flagship quantum technique for breaking public-key cryptography, offers exactly such an efficient period-finding method, thereby subsuming the tasks of factoring and solving discrete logarithms in one stroke. Once the period of an appropriate modular function is uncovered, RSA's factorisation barrier collapses and the private key emerges; the same principle undermines ECC keys through discrete logarithms. RSA serves as the standard test case not because it is uniquely weak or strong, but because it is widely used and shares this exploitable structure with other asymmetric algorithms.
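
To make the period-finding connection concrete, here is a minimal Python sketch of the classical side of the attack. It brute-forces the period of a^x mod n, then applies Shor's classical post-processing to split a toy modulus; the brute-force loop is exactly the exponential-time step that the quantum part of Shor's algorithm replaces.

```python
from math import gcd

def classical_period(a: int, n: int) -> int:
    """Brute-force the period r of f(x) = a^x mod n: the smallest r > 0
    with a^r = 1 (mod n). Assumes gcd(a, n) = 1. This loop takes time
    exponential in the bit length of n, which is exactly the step
    Shor's algorithm makes efficient."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_from_period(a: int, n: int):
    """Shor's classical post-processing: if the period r is even and
    a^(r/2) is not -1 mod n, then gcd(a^(r/2) +/- 1, n) splits n."""
    r = classical_period(a, n)
    if r % 2:
        return None
    half = pow(a, r // 2, n)
    if half == n - 1:
        return None
    p = gcd(half - 1, n)
    return (p, n // p) if 1 < p < n else None

# Toy example: n = 15, base a = 7. The period of 7^x mod 15 is 4,
# and gcd(7**2 - 1, 15) = 3, gcd(7**2 + 1, 15) = 5.
print(factor_from_period(7, 15))  # -> (3, 5)
```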


By contrast, many cryptographic primitives are built on very different foundations, where efficient period finding is useless. Symmetric ciphers, hash functions and random-number generators do not rely on the structure that period finding exploits. Their security margins are therefore indifferent to advances in implementing Shor's algorithm on a quantum computer.



Qubit count alone defines neither an algorithm's requirements nor a quantum computer's capacity.


Modern asymmetric cryptographic systems remain secure—not because quantum attacks are impossible, unknown, or unachievable in principle, but because today's quantum hardware lacks the scale to run algorithms of sufficient length and depth to break real-world cryptographic protocols. As a result, current security relies on the slow progress of quantum hardware and algorithm development—a fragile and temporary foundation for long-term protection. While some argue that large-scale quantum machines may never be built, industry consensus, as discussed earlier in reference to standardisation body decisions, points in the opposite direction.


This raises a natural question: what do we really mean by "quantum computing capacity"? Media coverage often equates hardware milestones with qubit counts—spotlighting each new prototype that crosses a headline-grabbing threshold. But focusing solely on the number of qubits can give a misleading impression, as actual capacity depends on algorithmic demands and hardware characteristics well beyond raw qubit totals. True progress hinges on many interlocking metrics, such as:

  • Gate fidelity, the error rate in single- and two-qubit operations; 

  • Coherence time, the window before quantum states drift apart; 

  • Processor topology, the pattern of connections among qubits. Perfect "all-to-all" connectivity exists only in a handful of trapped-ion systems, whereas most platforms rely on lattice-style layouts that restrict which qubits can interact directly. 

  • Error-correction codes, which determine the cost of producing logical qubits, the building blocks of useful quantum computation. A logical qubit is a stable, error-resistant unit constructed from many noisy, short-lived physical qubits, which on their own are too fragile for practical computation (a rough sense of this overhead is sketched below).

  • Algorithm design, which determines how efficiently quantum computing resources are used; more on this below.


In other words, progress in algorithmic efficiency and error-correction techniques matters just as much as new hardware headlines about qubit counts and overall performance.
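
To give a feel for the overhead mentioned in the error-correction bullet above, here is a small sketch using the textbook qubit count for a rotated surface code: one logical qubit consumes d*d data qubits plus d*d - 1 measurement qubits at code distance d. This is a common simplification, not a statement about any particular machine.

```python
def surface_code_overhead(d: int) -> int:
    """Physical qubits per logical qubit for a rotated surface code of
    distance d: d*d data qubits plus d*d - 1 measurement qubits.
    A textbook approximation; exact counts vary by construction."""
    return 2 * d * d - 1

# Code distance 25 (assumed by the paper discussed below) lands near
# the often-quoted figure of ~1,000 physical qubits per logical qubit.
for d in (5, 13, 25):
    print(f"distance {d:2d}: {surface_code_overhead(d):4d} physical qubits")
# distance  5:   49 physical qubits
# distance 13:  337 physical qubits
# distance 25: 1249 physical qubits
```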


A newly developed quantum algorithm enhances computational efficiency, reducing hardware demands.


A successful quantum attack relies on the seamless integration of multiple components, spanning from quantum hardware to finely tuned algorithms. A recent study (https://arxiv.org/abs/2505.15917) offers new insight on this front, estimating that RSA-2048 could be broken with fewer than one million noisy qubits in under a week. The paper builds on work dating back to 2019 and proposes a number of optimisations that significantly reduce the estimated quantum resources needed to factor a 2048-bit RSA key.


Since Peter Shor introduced his groundbreaking period-finding method in 1994, shaking the foundations of modern cryptography, steady progress has been made across the quantum computing stack. Researchers have systematically reduced the resource demands of quantum algorithms while advancing supporting technologies, including control systems, error-correction codes and compilation techniques. Algorithmic research tends to concentrate on the core algorithm and its associated error-correction methods, while other layers of the stack advance through different segments of the research community. Combining smart algorithmic choices with a well-matched error-correction code can shift the total resource estimate, sometimes dramatically.


This new study introduces a significantly streamlined version of Shor's algorithm—achieved without relaxing any of the standard cost assumptions. To appreciate the scale of this advancement, consider earlier benchmarks: Fowler's 2012 surface-code blueprint estimated the need for around one billion physical qubits. A 2019 update brought that figure down to twenty million. Now, the new architecture reduces the requirement again—to roughly one million—marking a 1,000× drop since 2012 and a 20× improvement over the 2019 estimate.


Algorithmic improvements

Since 2012, researchers have introduced a series of engineering innovations that have steadily driven down the number of physical qubits required for Shor's algorithm. To illustrate how this progress took shape, here are three key advances that helped cut the requirement from 20 million to just one million qubits:

  1. Approximate residue arithmetic. Think of ordinary integer math carried out on a regular CPU: every add or multiply is exact. Here, the circuit intentionally lets tiny rounding errors slip through, as long as they cancel out before the final answer appears, which eliminates a significant portion of the supporting logic (see the sketch after this list).

  2. Yoked surface codes. When a logical qubit is just "sitting in memory," its protective checks can be shared with a neighbour, much like two virtual machines sharing the same host firewall rules. This halves the physical space needed for parked data.

  3. Magic-state cultivation. In quantum computing, non-Clifford operations are quantum gates that fall outside the Clifford group—a set of gates including the Hadamard, Phase (S), and CNOT gates. Clifford gates have the distinctive property of being efficiently simulatable on classical computers, meaning circuits composed entirely of these gates can be simulated in polynomial time. While this makes them useful for certain tasks, it also means they offer no computational advantage over classical systems.

    To achieve universal quantum computation, however, non-Clifford operations are essential. Without them, a quantum computer cannot perform arbitrary quantum algorithms. Gates such as the T gate serve this crucial role, enabling the full power of quantum processing.

    Yet non-Clifford operations come with significant challenges. Unlike Clifford gates, they are difficult to implement in a fault-tolerant manner and often require sophisticated techniques like magic-state distillation. These operations are what give quantum computers their advantage, and also what makes them difficult to build and scale reliably.

    Older blueprints built huge magic-state factories that cranked these states out in bulk. The new scheme grows them in small, incremental batches, slashing real-estate needs: think brewing micro-lots of a specialty chemical instead of running a refinery.
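
To illustrate the first item, here is a sketch of exact residue arithmetic, the Chinese Remainder Theorem representation that residue approaches build on: addition and multiplication act on each small residue independently, with no carries crossing between channels. The paper's approximate variant goes further and tolerates small, self-cancelling errors; that refinement is not reproduced here, and the moduli below are arbitrary illustrative choices.

```python
from math import prod

MODULI = (7, 11, 13, 15)        # any pairwise-coprime moduli work
M = prod(MODULI)                # representable range: 0 .. 15014

def to_residues(x):
    """Represent x by its residue modulo each modulus."""
    return tuple(x % m for m in MODULI)

def from_residues(rs):
    """Reconstruct x (mod M) with the Chinese Remainder Theorem."""
    x = 0
    for r, m in zip(rs, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # pow(..., -1, m): inverse mod m
    return x % M

a, b = 123, 98
# Add and multiply channel by channel; no carry ever crosses moduli.
add = tuple((x + y) % m for x, y, m in zip(to_residues(a), to_residues(b), MODULI))
mul = tuple((x * y) % m for x, y, m in zip(to_residues(a), to_residues(b), MODULI))
print(from_residues(add), from_residues(mul))   # -> 221 12054
```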


Time required to run the algorithm

Algorithm resources are only half the equation; wall-clock time matters, too. The same paper estimates that a 2048-bit RSA key could be cracked in fewer than seven days on a million-qubit machine. A week is not "real-time," but it is short enough to erode confidence in long-term encrypted archives. In practical terms, that timeline guides how organisations triage data and decide which systems to migrate to quantum-safe alternatives first.


Assumptions behind the algorithm

The resource and runtime estimates above mean little without some foundational assumptions about the hardware. It's easy to make an algorithm appear practical by assuming ideal conditions, such as near-perfect hardware with error rates approaching zero and operations running at gigahertz speeds. Those assumptions, however, would be far from today's engineering reality.

The paper that introduced this new algorithm avoids such extremes. Its performance estimates are grounded in assumptions that, while ambitious, remain physically plausible. So, what exactly does it assume?


A. Surface-code cycle time: 1 microsecond

  • What it means. A "cycle" is one round of error checks across the entire qubit grid. Finishing it in one microsecond (µs) allows the system to catch mistakes before they spread.

  • Where things stand. State-of-the-art superconducting chips measure qubits in roughly 0.5–2 µs, but only for devices with dozens of qubits. Maintaining that speed on a million-qubit lattice would require faster readout electronics and cryogenic wiring than are currently available.


B. Control-system reaction time: 10 microseconds

  • What it means. After each cycle, classical processors must decode the error pattern and send correction signals within 10 µs.

  • Where things stand.  FPGA-based decoders have hit sub-100-µs latencies for small codes. Scaling tenfold faster while handling a thousand-fold larger data stream remains an open engineering problem, though specialised decoders are under active development.
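
A quick back-of-the-envelope check shows why assumptions A and B are so demanding together. The sketch below uses only the numbers quoted above; these are the paper's assumptions, not measured values from any real device.

```python
CYCLE_TIME_US = 1.0      # assumption A: one error-check round per microsecond
DECODE_BUDGET_US = 10.0  # assumption B: classical decode-and-correct deadline

week_us = 7 * 24 * 3600 * 1_000_000          # one week in microseconds
cycles = week_us / CYCLE_TIME_US
print(f"{cycles:.3e} correction cycles in a one-week run")  # -> 6.048e+11

# Every one of those ~6 x 10^11 rounds must be decoded within the 10 us
# budget, which is why decoder latency, not just qubit quality, gates
# the attack timeline.
```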


C. Target logical error rate: 10⁻¹⁵

  • What it means. After error correction, the chance that a logical qubit flips during one operation should be no worse than one in a quadrillion.

  • Where things stand. Recent demonstrations report logical error rates of around 10⁻⁵. Reaching 10⁻¹⁵ demands an improvement of roughly ten orders of magnitude, which is plausible in theory but far beyond current laboratory practice.


D. ~1,000 physical qubits per logical qubit (code distance 25)

  • What it means. How efficient the error-correction code is: at code distance 25, roughly 1,000 physical qubits are consumed to build one reliable logical qubit.

  • Where things stand. Experiments have reached code distance 5 with ~50 qubits. Distance 25 is conceptually straightforward but calls for uniform fabrication and calibration across chips two orders of magnitude larger than today's best.
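
Assumptions C and D are linked by a widely used rule of thumb for surface-code performance: once physical errors are below threshold, the logical error rate falls exponentially with code distance, roughly p_L ~ A * (p/p_th)^((d+1)/2). The constants below (A = 0.1, threshold 1%) are illustrative round numbers, not the paper's exact fit.

```python
def logical_error_rate(d: int, p: float, p_th: float = 1e-2, A: float = 0.1) -> float:
    """Rule-of-thumb surface-code scaling: p_L ~ A * (p/p_th)^((d+1)/2).
    A and p_th are fit constants that depend on the decoder and noise
    model; these round values are illustrative only."""
    return A * (p / p_th) ** ((d + 1) // 2)

# With 0.1% physical gate errors (assumption F), raising the distance
# drives the logical error rate down exponentially:
for d in (5, 15, 25, 27):
    print(f"distance {d:2d}: p_L ~ {logical_error_rate(d, 1e-3):.0e}")
# distance  5: p_L ~ 1e-04
# distance 15: p_L ~ 1e-09
# distance 25: p_L ~ 1e-14
# distance 27: p_L ~ 1e-15
```

With these round constants the 10⁻¹⁵ target arrives near distance 27 rather than 25; the exact crossover is sensitive to the fit constants, so the paper's distance-25 figure sits comfortably within the same scaling regime.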


E. Nearest-neighbour, square-lattice connectivity

  • What it means. Every qubit talks only to its four immediate neighbours, simplifying control but imposing strict layout rules.

  • Where things stand. Current quantum-computing platforms – superconducting qubits, neutral-atom arrays, photonics, and more – differ markedly in geometry and control. As a result, mapping an algorithm's nearest-neighbour constraint onto a real-world lattice is invariably non-trivial.


F. Uniform gate error rate: 0.1 % (99.9 % fidelity)

  • What it means. All single- and two-qubit operations must fail less than once in a thousand runs.

  • Where things stand. Record single-qubit fidelities sit at 99.998 % and isolated two-qubit gates at 99.9–99.94 % on chips containing only a few interacting qubits. Scaling that uniform performance to the millions of qubits needed for fault-tolerant computers is overwhelmingly an engineering problem—fabrication spread, wiring density, crosstalk, and radiation-induced correlated errors—not a show-stopper in fundamental physics.


In short, none of the paper's assumptions breaks known limits, but every one of them stretches current technology to its next growth curve. Large-scale quantum factoring remains a post-prototype goal: plausible in principle, yet several focused engineering leaps away from present-day machines.
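
As a compact recap, the sketch below tabulates the gap between each assumption and the best demonstrated figures quoted in sections A through F above. The "gap" ratio deliberately ignores the scaling dimension (a million qubits versus dozens), which is the harder part of the problem.

```python
# (assumption, paper's assumed value, best demonstrated value from above)
gaps = [
    ("cycle time (us)",      1.0,   0.5),    # A: met, but only on small chips
    ("decoder latency (us)", 10.0,  100.0),  # B: sub-100 us on small codes
    ("logical error rate",   1e-15, 1e-5),   # C
    ("code distance",        25,    5),      # D
    ("two-qubit gate error", 1e-3,  6e-4),   # F: 99.9-99.94% fidelity
]
for name, assumed, shown in gaps:
    ratio = max(assumed, shown) / min(assumed, shown)
    print(f"{name:22s} assumed {assumed:g}, demonstrated {shown:g} "
          f"(~{ratio:.3g}x gap)")
```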



Summary

To close, I want to quote the final paragraph from the author of the paper discussed above:

I cannot plausibly claim that a 2048-bit RSA integer could be factored with a hundred thousand noisy qubits. But there's a saying in cryptography: "Attacks always get better". Over the past decade, that has held true for quantum factoring. Looking forward, I agree with the initial public draft of the NIST internal report on the transition to post-quantum cryptography standards: vulnerable systems should be deprecated after 2030 and disallowed after 2035. Not because I expect sufficiently large quantum computers to exist by 2030 but because I prefer security not to be contingent on progress being slow.

