Posted by A_D_E_P_T 12/21/2025

More on whether useful quantum computing is “imminent”(scottaaronson.blog)
152 points | 125 comments
prof-dr-ir 12/21/2025|
I am confused, since even factoring 21 is apparently so difficult that it "isn’t yet a good benchmark for tracking the progress of quantum computers." [0]

So the "useful quantum computing" that is "imminent" is not the kind of quantum computing that involves the factorization of nearly prime numbers?

[0] https://algassert.com/post/2500

Strilanc 12/22/2025||
Factoring will be okay for tracking progress later; it's just a bad benchmark now. Factoring benchmarks have little visibility into fault tolerance spinning up, which is the important progress right now. Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.
prof-dr-ir 12/22/2025||
> Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.

Either this relation is not that strong, or factoring should "imminently" become a reasonable benchmark, or useful quantum computing cannot be "imminent". So which one is it?

I think you are the author of the blogpost I linked to? Did I maybe interpret it too negatively, and was it not meant to suggest that the second option is still quite some time away?

adgjlsfhk1 12/22/2025||
Under a Moore's law-like scenario, factoring 21 happens only about ~5 years before factoring a 1024-bit number. With all the optimizations, factoring an n-bit number only requires ~n logical qubits, but most of those optimizations only work for large n, so in terms of required machine size 21 is only ~5 doublings away from a 1024-bit number.

The other problem is that factoring 21 is so easy that it actually makes it harder to prove you've factored it with a functional quantum computer. For big numbers, your program can fail 99% of the time, because once you get the right result even a single time you can verify it classically and you've proven the algorithm worked. 21 is small enough that it's hard not to factor, so demonstrating that you've factored it with a QC is fairly hard. As a result, I wouldn't be surprised if the first number publicly factored by a quantum computer (using error correction) were in the thousands rather than 21. By using a number that is not absolutely tiny, it becomes a lot easier to show that the system works.
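(To make the "fail 99% of the time" point concrete, here's a minimal sketch, not anyone's actual experiment: the classical wrapper around a probabilistic factoring subroutine just retries until multiplication confirms a nontrivial factor, so a low per-run success rate is fine as long as it's nonzero. The 1% success probability and the trial-division stand-in are made up for illustration.)

    import random

    def unreliable_factor(N):
        # Stand-in for a noisy quantum subroutine: usually returns garbage,
        # occasionally (1% of runs here) returns a genuine nontrivial factor.
        if random.random() < 0.01:
            for p in range(2, int(N ** 0.5) + 1):
                if N % p == 0:
                    return p
        return random.randrange(2, N)  # garbage guess

    def factor_with_retries(N):
        while True:
            p = unreliable_factor(N)
            if N % p == 0 and 1 < p < N:  # cheap classical verification
                return p, N // p

    print(factor_with_retries(21))  # e.g. (3, 7)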

upofadown 12/22/2025|||
Perhaps? The sort of quantum computers that people are talking about now are not general purpose. So you might be able to make a useful quantum computer that does something other than run Shor's algorithm.
gsf_emergency_6 12/22/2025|||
Simulating the Hubbard model for superconductors at large scales is significantly more likely to happen sooner than factoring RSA-2048 with Shor’s algorithm.

Google have been working on this for years

Don't ask me if they've the top supercomputers beat, ask Gemini :)

tomalbrc 12/22/2025||
Gemini hallucinated me a wild answer.
bawolff 12/22/2025|||
I don't think that's correct; the research projects the article is talking about all seem to aim at making general-purpose quantum computers eventually. Obviously they haven't succeeded yet, but general purpose does seem to be what they are aiming for.
unparagoned 12/24/2025|||
Basically, QCs are so far from ever doing a useful computation that we need other benchmarks to measure progress. We need to be thinking in timelines of a lifetime, not 5 years.
bawolff 12/21/2025||
I always find this argument a little silly.

Like if you were building one of the first normal computers, how big the numbers you can multiply are would be a terrible benchmark, since once you have figured out how to multiply small numbers it's fairly trivial to multiply big numbers. The challenge is making the computer multiply numbers at all.

This isn't a perfect metaphor, as scaling is harder in a quantum setting, but we are mostly at the stage where we are trying to get these things to work at all. Once we reach the stage where we can factor small numbers reliably, the time to go from smaller numbers to bigger numbers will probably be relatively short.

jvanderbot 12/21/2025|||
From my limited understanding, that's actually the opposite of the truth.

In QC systems, the engineering "difficulty" scales very badly with the number of gates or steps of the algorithm.

It's not like addition, where you can repeat a process in parallel and bam, ALU. From what I understand as a layperson, the size of the inputs is absolutely part of the scaling.

dmurray 12/22/2025||
But the reason factoring numbers is used as the quantum benchmark is exactly that we have a quantum algorithm for that problem which is meant to scale better than any known algorithm on a classical computer.

So it seems like it takes an exponentially bigger device to factor 21 than 15, and 35 than 21, and so on, but if I understand right, at some point this levels out and it's only, relatively speaking, a little harder to factor say 10^30 than 10^29.
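(For scale, here's a rough sketch of the asymptotics that claim rests on: the heuristic cost of the best known classical algorithm, the general number field sieve, versus a generic ~n^3 gate count for Shor's algorithm with schoolbook modular arithmetic. The constants are illustrative, not a real resource estimate.)

    import math

    def gnfs_cost(bits):
        # Heuristic GNFS cost: exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))
        ln_n = bits * math.log(2)
        return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    def shor_gates(bits):
        # Very rough ~n^3 logical gate count for Shor with naive modular arithmetic.
        return float(bits) ** 3

    for bits in (512, 1024, 2048, 4096):
        print(bits, f"{gnfs_cost(bits):.2e}", f"{shor_gates(bits):.2e}")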

Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?

bawolff 12/22/2025|||
> Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?

I don't think we have any "real" experience scaling from 15 to 21. Or at least not in the way Shor's algorithm would be implemented in practice on fault-tolerant qubits.

We haven't even done 15 in a "real" way yet. I suspect the time to factor 15 on fault-tolerant qubits will be a lot longer than the time to go from 15 to 21.

pclmulqdq 12/22/2025|||
The algorithm in question is a hypothetical algorithm for a hypothetical computer with certain properties. The properties in question are assumed to be cheap.

In the case of quantum algorithms in BQP, though, one of those properties is SNR of analog calculations (which is assumed to be infinite). SNR, as a general principle, is known to scale really poorly.

bawolff 12/22/2025||
> In the case of quantum algorithms in BQP, though, one of those properties is SNR of analog calculations (which is assumed to be infinite). SNR, as a general principle, is known to scale really poorly.

As far as i understand, that isn't an assumption.

The assumption is that the SNR of logical (error-corrected) qubits is near infinite, and that such logical qubits can be constructed from noisy physical qubits.

pclmulqdq 12/22/2025|||
There are several properties that separate real quantum computers from the "BQP machine," including decoherence and SNR. Error-correction of qubits is mainly aimed at decoherence, but I'm not sure it really improves SNR of gates on logical qubits. SNR dictates how precisely you can manipulate the signal (these are a sort of weird kind of analog computer), and the QFTs involved in Shor's algorithm need some very precise rotations of qubits. Noise in the operation creates an error in that rotation angle. If your rotation is bad to begin with, I'm not sure the error correction actually helps.
seanhunter 12/22/2025|||
> The assumption is that the SNR of logical (error-corrected) qubits is near infinite, and that such logical qubits can be constructed from noisey physical qubits.

This is an argument I've heard before and I don't really understand it[1]. I get that you can make a logical qubit out of physical qubits and build in error correction so the logical qubit has perfect SNR, but surely if, say, the number of physical qubits you need to get the nth logical qubit is O(n^2), then the SNR of the whole system isn't near infinite, it's really bad.

[1] Which may well be because I don't understand quantum mechanics ...

adgjlsfhk1 12/22/2025||
The really important thing is that logical qubit error decreases exponentially with the amount of error correction. As such, for the ~1000-qubit regime needed for factoring, the amount of error correction ends up being essentially a constant factor (~1000x physical to logical). As long as you can build enough "decent" quality physical qubits and connect them, you can get near-perfect logical qubits.
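(A minimal numerical sketch of that exponential suppression, using the textbook surface-code heuristic p_L ~ A * (p/p_th)^((d+1)/2); the constants A and p_th, the 1e-3 physical error rate, and the ~2d^2 qubits per logical qubit are illustrative assumptions, not numbers from any real device.)

    def logical_error_rate(p_phys, d, A=0.1, p_th=1e-2):
        # Heuristic surface-code scaling: error falls exponentially in the code distance d.
        return A * (p_phys / p_th) ** ((d + 1) / 2)

    p_phys = 1e-3  # assumed physical error rate per operation
    for d in (3, 7, 11, 15, 21, 25):
        n_phys = 2 * d * d  # ~2d^2 physical qubits per logical qubit
        print(f"d={d:2d}  physical qubits ~{n_phys:4d}  logical error ~{logical_error_rate(p_phys, d):.1e}")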
JanisErdmanis 12/22/2025||
Having demonstrated error correction, some incremental improvements can now be made to make it more repeatable and with better characteristics.

The hard problem then remains how to connect those qubits at scale. Using a coaxial cable for each qubit is impractical; some form of multiplexing is needed. This, in turn, causes qubits to decohere while waiting for their control signal.

dboreham 12/22/2025||||
This is quite fallacious and wrong. The first computers were built to solve, immediately, problems that were already being solved slowly by manual methods. There was never a period where people built computers so slow that they were slower than adding machines and slide rules, just because they seemed cool and might one day be much faster.
bawolff 12/22/2025||
Charles Babbage started on the difference engine in 1819. It took a very long time after that before computers were useful.
bawolff 12/22/2025||
Additionally, part of the problem was that metal working at the time wasn't really advanced enough to make the required parts to the necessary precision at a reasonable price. Which sounds really quite similar to how modern quantum computers are right at the edge of current engineering technology.
sfpotter 12/21/2025||||
The fact that it does appear to be so difficult to scale things up would suggest that the argument isn't silly.
thrance 12/21/2025|||
Actually yes, how many numbers you can crunch per second and how big they are were among the first benchmarks for actual computers. Also, these prototypes were almost always immediately useful. (Think of the machine that cracked Enigma.)

In comparison, there is no realistic path forward for scaling quantum computers. Anyone serious who is not trying to sell you QC will tell you that quantum systems become exponentially less stable the bigger they are and the longer they live. That is a fundamental physical truth. And since they're still struggling to do anything at all with a quantum computer, don't get your hopes up too much.

amenhotep 12/22/2025|||
That would be the Bombe, which didn't really crunch numbers at all, but was an electromechanical contraption that automated physically setting Enigma rotors to enumerate which combinations were possible matches.
bawolff 12/22/2025|||
> Anyone serious that is not trying to sell you QC will tell you that quantum systems become exponentially less stable the bigger they are and the longer they live.

If what you are saying is that error rates increase exponentially such that quantum error correction can never correct more errors than it introduces, i don't think that is a widely accepted position in the field.

fragmede 12/22/2025||
People who believed that would opt out of being in the field, though. No?
tromp 12/21/2025||
I realize this is a minority opinion, and goes against all theories of how quantum computing works, but I just cannot believe that nature will allow us to reliably compute with amplitudes as small as 2^-256. I still suspect something will break down as we approach and move below the Planck scale.
semi-extrinsic 12/21/2025||
Fun fact: the Planck mass is about 22 micrograms, about the amount of Vitamin D in a typical multivitamin supplement, and the corresponding derived Planck momentum is 6.5 kg m/s, which is around how hard a child kicks a soccer ball. Nothing inherently special or limiting about these.
ttoinou 12/22/2025|||
If you look at Planck units or any dimensionless set of physical units, you will see that mass stands apart from the other units. There's a factor of 10^15 or something like that, i.e. we can't scale all physical units to be around the same values; something is going on with mass and gravity that makes it different from the others.
adrian_b 12/22/2025||
After computing, in 1899, for the first time the value of what is now named "Planck's constant" (in fact Planck computed both of the constants now named "Boltzmann's constant" and "Planck's constant"), Planck immediately realized that this provided an extra value, besides those previously known, that could be used for the definition of a natural unit of measurement.

Nevertheless, Planck did not understand well enough the requirements for a good system of fundamental units of measurement (he was a theoretician, not an experimentalist; he had computed his constants by a better mathematical treatment of the experimental data provided by Paschen), so he did not find any good way to integrate Planck's constant into a system of fundamental units. He made the same mistake Stoney had made 25 years before him (after computing the value of the elementary electric charge): of the two variants previously proposed by Maxwell for defining the unit of mass (deriving it from the mass of some atom or molecule, or deriving it from the Newtonian constant of gravitation), he chose the wrong one.

All dimensionless systems of fundamental units are worthless in practice (because they cause huge uncertainties in all values of absolute measurements) and they do not have any special theoretical significance (for now; such a significance would appear only if it became possible to compute exactly from theory the values of the 2 constants of the electromagnetic and gravitational interactions, instead of measuring them through experiments; so far nobody has had any useful idea for a theory that could do such things).

For the number of independently chosen fundamental units of measurement there exists an optimum value and the systems with either more or fewer fundamental units lead to greater uncertainties in the values of the physical quantities and to superfluous computations in the mathematical models.

The dimensionless systems of units are not simpler, but more complicated, so attempting to eliminate the independently chosen fundamental units is the wrong goal when searching for the best system of units of measurement.

My point is that the values of the so-called "Planck units" have absolutely no physical significance, therefore it is extremely wrong to use them in any reasoning about what is possible or impossible or about anything else.

The "Planck units" are not unique, there also exists a very similar system of "Stoney units", proposed a quarter of century before the "Planck units", where the values of the units are different, and there are also other variants of dimensionless systems of units proposed later. None of them is better than the others and all are bad, the worst defect being that the huge experimental uncertainties from measuring the value of the Newtonian constant of gravitation are moved from that single value into the values of all unrelated physical quantities, so that no absolute value can be known precisely, but only the ratios between quantities of the same kind.

In a useful system of fundamental units, for all units there are "natural" choices, except for one, which is the scale factor of the spatio-temporal units. For this scale factor of space-time, in the current state of knowledge there is no special value that can be distinguished from other arbitrary choices, so it is chosen solely based on the practical ease of building standards of frequency and wave-number that have adequate reproducibility and stability.

The only historical value of the "Planck units" is that they provide a good example of how one should NOT choose a system of units of measurement. The fact that they are still frequently mentioned by some people in any other context than criticizing such a system just demonstrates the very sad state of physics education, where no physics textbook includes an adequate presentation of the foundation of physics, which is the theory of the measurement of physical quantities. One century and a half ago, Maxwell began his treatise on electricity and magnetism with a very good exposition of the state of metrology at that time, but later physics textbooks have become less and less rigorous, instead of improving.

drdeca 12/23/2025||
There are thought experiments where the various Planck units (or those times some small power of pi or 2pi) pop out quite naturally.

The theoretical entropy of a Schwarzschild black hole is nicely expressed using the Planck area.

So…

No. Your assertion that they have no value in theory, is wrong.

(Also, like, angular momentum is quantized in multiples of hbar or hbar/2 or something like that.)

It may be true that they aren’t a good system of units for actual measurements, on account of the high uncertainty (especially for G).

But, there is a reason why it is common to use units where G=c=hbar=1 : it is quite convenient.

bawolff 12/22/2025|||
In some ways i think that is the most exciting possibility. If attempts at making quantum computers let us find exactly where the current theories break down and probe how that happens, it will probably be one of the most important physics discoveries of the century.
elcritch 12/23/2025|||
Personally I just hope there’s a buffer overflow (underflow?) that lets us jailbreak this universe to get warp speed or FTL comms!

More realistically it breaking down would hopefully give us a new physics frontier.

jlokier 12/22/2025|||
The amplitudes aren't small in the 512-dimensional subspace where 256-qubit calculations take place.

2^256 states are comfortably distinct in that many dimensions with amplitude ~1. Their distinctness is entirely in direction.

The obvious parallels to vector embeddings and high-dimensional tensor properties have some groups working out how to combine them into "quantum AI", and because that doesn't require the same precision (much as trained neural nets still work usefully after heavy quantization and noise), quantum AI might arrive before regular quantum computation, and might be feasible even if the latter is not.

krastanov 12/22/2025|||
The magnitude of an "amplitude" is basis dependent. A basis is a human invention, an arbitrary choice made by the human to describe nature. The choice of basis is not fundamental. So just choose a basis in which there are no vanishingly small amplitudes and your worry is addressed.
tromp 12/22/2025|||
Any implementation of Shor will need vanishingly small amplitudes, as it forms a superposition of 2^256 classical states.
krastanov 12/23/2025||
This is completely missing the point. There is nothing fundamental about an amplitude. The amplitudes are this small because you have chosen to work in a basis in which they are small. Go to the Hadamard basis and the amplitude is exactly 1. After all, the initial state of Shor's algorithm (the superposition of all classical bitstrings) is the perfectly factorizable, completely unentangled state |+++++++>
tromp 12/23/2025|||
The initial state of Shor's algorithm just has the n-bit number to be factored. From there it creates the superposition in the next n steps.

Forget the talk about amplitudes. What I find hard to believe is that nature will let us compute reliably with hundreds of entangled qubits.

krastanov 12/23/2025||
Shor's algorithm does not start with the qubits storing anything related to the n-bit number to be factored. The n-bit number is encoded *only* in the XOR-oracle for the multiplication function.

Shor's algorithm starts with the qubits in a superposition of all possible bitstrings. That is the only place we have exponentially small amplitudes at the start (in a particular choice of a basis), and there is no entanglement in that state to begin with.

We do get interesting entangled states after the oracle step, that is true. And it is fair to have a vague sense that entanglement is weird. I just want to be clear that your last point (forgetting about amplitudes, and focusing on the weirdness of entangled qubits) is a gut feeling, not something based in the mathematics that has proven to be a correct description of nature over many orders of magnitude.

Of course, it would be great if it turns out that quantum mechanics is wrong in some parameter regime -- that would be the most exciting thing in Physics in a century. There is just not much hope it is wrong in this particular way.

oh_my_goodness 12/23/2025|||
When the amplitude has norm 1, there is only one nonzero amplitude. Changing basis does not affect the number of basis functions.
krastanov 12/23/2025||
> When the amplitude has norm 1, there is only one nonzero amplitude.

Yes, that is exactly the point. The example statevector you guys are talking about can (tautologically) be written in a basis in which only one of its amplitudes is nonzero.

Let's call |ψ⟩ the initial state of the Shor algorithm, i.e. the superposition of all classical bitstrings.

|ψ⟩ = |00..00⟩ + |00..01⟩ + |00..10⟩ + .. + |11..11⟩

That state is factorizable, i.e. it is *completely* unentangled. In the X basis (a.k.a. the Hadamard basis) it can be written as

|ψ⟩ = |00..00⟩ + |00..01⟩ + |00..10⟩ + .. + |11..11⟩ = |++..++⟩

You can see that even from the preparation circuit of the Shor algorithm. It is just single-qubit Hadamard gates -- there are no entangling gates. Preparing this state is a triviality and in optical systems we have been able to prepare it for decades. Shining a wide laser pulse on a CD basically prepares exactly that state.
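(A tiny numerical illustration of that, as a plain-numpy sketch rather than anyone's actual device: build |+...+> by applying H to each qubit of |0...0> and check that every computational-basis amplitude is 2^(-n/2), even though the state is a product of single-qubit states with no entangling gate anywhere.)

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard
    plus = H @ np.array([1.0, 0.0])               # |+> = (|0> + |1>)/sqrt(2)

    n = 10            # small enough that the 2^n amplitude vector fits in memory
    state = plus
    for _ in range(n - 1):
        state = np.kron(state, plus)  # tensor product of n unentangled qubits

    # In the computational basis every amplitude is "exponentially small" (2^-n/2)...
    print(np.allclose(state, 2 ** (-n / 2)))   # True
    # ...yet in the Hadamard basis the same state is a single basis vector, |++..++>.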

> Changing basis does not affect the number of basis functions.

I do not know what "number of basis functions" means. If you are referring to "non zero entries in the column-vector representation of the state in a given basis", then of course it changes. Here is a trivial example: take the x-y plane and take the unit vector along x. It has one non-zero coefficient. Now express the same vector in a basis rotated at 45deg. It has two non-zero coefficients in that basis.

---

Generally speaking, any physical argument that is valid only in a single basis is automatically a weak argument, because physics is not basis dependent. It is just that some bases make deriving results easier.

Preparing a state that is a superposition of all possible states of the "computational basis" is something we have been able to do since before people started talking seriously about quantum computers.

oh_my_goodness 12/23/2025||
Sounds like we agree on how basis vectors work. But you’re talking about the initial state, and I’m talking about the output. Finding a basis that makes the output an eigenvector isn’t trivial. Take Grover’s algorithm. You have to iterate to approximate that eigenvector. Small errors in the amplitudes can prevent convergence. When you have 2^256 components, amplitudes are divided down by around 2^128.

Even preparing the initial state that accurately is only trivial on paper.

krastanov 12/23/2025||
The initial state was the example given. It is fair to then point out the consecutive states though. A few points still hold:

- I am not saying that you have to find a basis in which your amplitudes are not small, I am saying that such a basis always exists. So any argument about "small amplitudes would potentially cause problems" probably does not hold, because there is no physical reality to "an amplitude" or "a basis" -- these are all arbitrary choices and the laws of physics do not change if you pick a different basis.

- In classical probability we are not worried about vanishingly small probabilities in probability distributions that we achieve all the time. Take a one-time pad of n bits. Its stochastic state vector in the natural basis is filled with exponentially small entries 1/2^n. We create one-time pads all the time and nature does not seem to mind.

- Most textbooks that include Shor's algorithm also include proof that you do not need precise gates. Shor's algorithm (or the quantum Fourier transform more specifically) converges even if you have finite absolute precision of the various gates.

- Preparing the initial state to extremely high precision in an optical quantum computer is trivial and it has been trivial for decades. There isn't really much "quantum" to it.

- It is fair to be worried about the numerical stability of a quantum algorithm. Shor's algorithm happens to be stable as mentioned above. But the original point by OP was that physics itself might "break" -- I am arguing against that original point. Physics, of course, might break, and that would be very exciting, but that particular way of it breaking is very improbable (because of the rest of the points posted above).

oh_my_goodness 12/23/2025||
I don’t think we can discuss precision usefully without numbers. We seem to agree on the word “finite” but that covers a lot of ground. “High” precision in rotating an optical polarization to exactly 45 degrees is maybe 60 dB, +- 0.0001% of the probability. That means the amplitudes are matched within 0.1%. 0.1% is fine for two qubits with 4 states. It might work for 8 qubits (256 states). For 256 qubits, no.
oh_my_goodness 12/23/2025||
Gah, I wrote the wrong thing. If each probability is 50% +- 10^{-6} then the amplitudes are matched to within around 2 times 10^{-6}.

But when N>2 this gets tougher rapidly.

If we add 10^12 complex amplitudes and each one is off by one part in 10^6, we could easily have serious problems with the accuracy of the sum. And 10^12 amplitudes is "only" around 40 qubits.

oh_my_goodness 12/22/2025|||
1/sqrt(N)
cevi 12/24/2025||
Are you also uncomfortable with the idea of flipping 256 unbiased coins independently?
yoan9224 12/22/2025||
Aaronson's take is characteristically grounded. The Willow chip announcement was impressive technically but the media coverage predictably overshot into "RSA is dead" territory when the actual achievement was improving error correction rates. The relevant timeline question is: when do quantum computers solve problems faster than classical computers for commercially useful tasks (not just contrived benchmarks)?

The error correction milestone matters because it's the gate to scaling. Previous quantum systems had error rates that increased faster than you could add qubits, making large-scale quantum computing impossible. If Willow actually demonstrates below-threshold error rates at scale (I'd want independent verification), that unblocks the path to 1000+ logical qubit systems. But we're still probably 5-7 years from "useful quantum advantage" on problems like drug discovery or materials simulation.

The economic argument is underrated. Even if quantum computers achieve theoretical advantage, they need to beat rapidly improving classical algorithms running on cheaper hardware. Every year we delay, classical GPUs get faster and quantum algorithms get optimized for near-term noisy hardware. The crossover point might be narrower than people expect.

What I find fascinating is the potential for hybrid classical-quantum algorithms where quantum computers handle specific subroutines (like sampling from complex distributions or solving linear algebra problems) while classical computers do pre/post-processing. That's probably the first commercial application - not replacing classical computers entirely but augmenting them for specific bottlenecks. Imagine a drug discovery pipeline where the 3D protein folding simulation runs on quantum hardware but everything else is classical.

OkayPhysicist 12/22/2025|
First? Try only. I'd be willing to wager a sizeable amount of money that no one save for a few niche research institutions trying to improve quantum computing will ever be using fully quantum setups.

QC is not a panacea. There are a handful of algorithms for problems in BQP but (as far as we know) not in P, and most of those aren't really relevant to tasks I would imagine the average person frequently engaging in. Simultaneously, quantum computers necessarily have complications that classical computers lack. Combined, I doubt people will ever be using purely quantum computers.

Aardwolf 12/21/2025||
Once quantum computers are possible, is there actually anything else, any other real-world application besides breaking crypto and number-theory problems, that they can do much better than regular computers?
comicjk 12/21/2025||
Yes, in fact they might be useful for chemistry simulation long before they are useful for cryptography. Simulations of quantum systems inherently scale better on quantum hardware.

https://en.wikipedia.org/wiki/Quantum_computational_chemistr...

GrilledChips 12/22/2025||
More recently it's turned out that quantum computers are less useful for molecular simulation than previously thought. See: https://www.youtube.com/watch?v=pDj1QhPOVBo

The video is essentially an argument from the software side (ironically she thinks the hardware side is going pretty well). Even if the hardware wasn't so hard to build or scale, there are surprisingly few problems where quantum algorithms have turned out to be useful.

comicjk 12/22/2025|||
It is tough to beat classical computers. They work really well, and a huge amount of time (including some of mine) has gone into developing fast algorithms for them to do things they're not naturally fast at, such as quantum chemistry.
oasisaimlessly 12/22/2025|||
At 15:00, she says "quantum computers are surprisingly good at [...] quantum simulations [of electron behavior]", which would seem to contradict you.
smurda 12/21/2025|||
One theoretical use case is “Harvest Now, Decrypt Later” (HNDL) attacks, or “Store Now, Decrypt Later” (SNDL). If an oppressive regime saves encrypted messages now, they can decrypt later when QCs can break RSA and ECC.

It's a good reason to implement post-quantum cryptography.

Wasn't sure if you meant crypto (btc) or cryptography :)

GrilledChips 12/22/2025||
I will never get used to ECC meaning "Error Correcting Code" or "Elliptic Curve Cryptography." That said, this isn't unique to quantum expectations. Faster classical computers or better classical techniques could make various problems easier in the future.
OJFord 12/22/2025||
What do you want it to mean?
silon42 12/22/2025||
I think he meant just Error Correcting Code (that alone, not the "or" with Elliptic Curve Cryptography).
layer8 12/21/2025|||
From TFA: ‘One more time for those in the back: the main known applications of quantum computers remain (1) the simulation of quantum physics and chemistry themselves, (2) breaking a lot of currently deployed cryptography, and (3) eventually, achieving some modest benefits for optimization, machine learning, and other areas (but it will probably be a while before those modest benefits win out in practice). To be sure, the detailed list of quantum speedups expands over time (as new quantum algorithms get discovered) and also contracts over time (as some of the quantum algorithms get dequantized). But the list of known applications “from 30,000 feet” remains fairly close to what it was a quarter century ago, after you hack away the dense thickets of obfuscation and hype.’
GrilledChips 12/22/2025||
It turns out they're not so useful for chemistry. https://www.youtube.com/watch?v=pDj1QhPOVBo
hattmall 12/22/2025||
I believe the primary most practical use would be compression. Devices could have quantum decoder chips that give us massive compression gains which could also massively expand storage capacity. Even modest chips far before the realization of the scale necessary for cryptography breaking could give compression gains on the order of 100 to 1000x. IMO that's the real game changer. The theoretical modeling and cryptography breaking that you see papers being published on is much further out. The real work that isn't being publicized because of the importance of trade secrets is on storage / compression.
askl 12/22/2025|||
Someone just has to figure out how to actually implement middle out compression on a quantum computer.
Terr_ 12/22/2025||||
> compression gains on the order of 100 to 1000x.

This feels like woo-woo to me.

Suppose you're compressing the text of a book: How would a quantum processor let you get a much better compression ratio, even in theory?

If you're mistakenly describing the density of information on some kind of physical object, that's not data compression, that's just a different storage medium.

cubefox 12/22/2025|||
Pretty sure quantum algorithms can't be used for compression.
bahmboo 12/21/2025||
I particularly like the end of the post where he compares the history of nuclear fission to the progress on quantum computing. Traditional encryption might already be broken but we have not been told.
littlestymaar 12/21/2025||
I really doubt we are anywhere close to this when there has been no published legit prime factorization beyond 21: https://eprint.iacr.org/2025/1237.pdf

Surely if someone had managed to factorize a 3- or 4-digit number, they would have published it, as it's far enough from weaponization to be worth publishing. To be used to break cryptosystems, you need to be able to factor at least 2048-bit numbers. Even assuming progress is linear with respect to the number of bits in the public key (this is the theoretical lower bound, and it assumes hardware scaling is itself linear, which doesn't seem to be the case), there's a pretty big gap between 5 bits and 2048, and the fact that no one has ever published any significant result (that is, not a magic trick where the number is chosen in a way that makes the calculation trivial, see my link above) showing any progress in that direction suggests we're not under any kind of immediate threat.

The reality is that quantum computing is still very, very hard, and very, very far from achieving what is theoretically possible with it.

bawolff 12/21/2025|||
In a world where spying on the civilian communications of adversaries (and preventing spying on your own civilians) is becoming more critical for national security interests, I suspect that national governments would be lighting more of a fire if they believed their opponents had one.
mvkel 12/21/2025|||
They absolutely are. NSA is obsessed with post-quantum projects atm
GrilledChips 12/22/2025|||
tbh they could just be pushing for people to adopt newer, less-tested, weaker algorithms. Switch from something battle-hardened to the QuantResist2000 algorithm, which they've figured out how to break with lattice reduction and a couple of GPUs, like those Minecraft guys did.
kibwen 12/22/2025||
Hybrid approaches are at least as strong as their strongest algorithm. You don't need to trust me on this, it's extremely simple to derive this principle yourself from basic knowledge of cryptography.
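(For what it's worth, here's a minimal sketch of why, purely illustrative and not any specific standard: the shared secrets of the classical and post-quantum KEMs are both fed into one key-derivation step, so recovering the session key requires breaking both component schemes.)

    import hashlib

    def combine_shared_secrets(ss_classical: bytes, ss_post_quantum: bytes) -> bytes:
        # Derive the session key from both KEM outputs plus a domain-separation label.
        # An attacker who breaks only one scheme still lacks the other input.
        return hashlib.sha256(b"hybrid-kem-demo" + ss_classical + ss_post_quantum).digest()

    # Hypothetical shared secrets, e.g. from an ECDH exchange and a lattice KEM.
    session_key = combine_shared_secrets(b"\x01" * 32, b"\x02" * 32)
    print(session_key.hex())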
tyre 12/21/2025|||
But is this because they are already needed or because they want to preserve encryption for past and present data post-quantum?
mvkel 12/22/2025||
The latter
the8472 12/21/2025|||
NSA is pushing for PQ algos.
throw310822 12/22/2025||
So you have one of the scientists at the forefront of quantum computing theory telling you that he has no idea whether quantum computing is already in a much more advanced state than he knows about?

If results in quantum computing started to "go dark", unpublished in the scientific literature and only communicated to the government/military, shouldn't he be one of the first to know, or at least notice?

cubefox 12/22/2025||
This sounds slightly alarming:

> I’m going to close this post with a warning. When Frisch and Peierls wrote their now-famous memo in March 1940, estimating the mass of Uranium-235 that would be needed for a fission bomb, they didn’t publish it in a journal, but communicated the result through military channels only. As recently as February 1939, Frisch and Meitner had published in Nature their theoretical explanation of recent experiments, showing that the uranium nucleus could fission when bombarded by neutrons. But by 1940, Frisch and Peierls realized that the time for open publication of these matters had passed.

> Similarly, at some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries. Indeed, for all we know, that point may have been passed already. This is the clearest warning that I can offer in public right now about the urgency of migrating to post-quantum cryptosystems, a process that I’m grateful is already underway.

Does anyone know how far along that migration is? Do we need to worry that the switch away from RSA won't be broadly deployed before quantum decryption becomes available?

JanisErdmanis 12/22/2025|
From analytical arguments considering a rather generic error type, we already know that for Shor's algorithm to produce a useful result, the error rate needs to decrease with the number of logical qubits as ~n^(-1/3), where n is the number of bits in the number [1].

This estimate, however, assumes that an interaction can be turned on between any two qubits. In practice, we can only do nearest-neighbour interactions on a square lattice, and we need to simulate the interaction between two arbitrary qubits by repeated application of SWAP gates, shuffling the qubits around as in the 15-puzzle. Simulating each two-qubit interaction adds about n SWAP gates, which multiplies the noise by the same factor; hence the error rate for logical qubits on a square lattice now needs to be around ~n^(-4/3).

Now comes the error correction. The estimates are somewhat hard to make here, as they depend on the sensitivity of the readout mechanism, but for example let's say a 10-bit number can be factored with a logical qubit error rate of 10^{-5}. Then we apply a surface code whose suppression scales exponentially, reducing the error rate by 10 times per 10 physical qubits, which we could express as ~1/10^{m/10}, where m is the number of physical qubits per logical qubit (which is rather optimistic). Putting in the numbers, it would follow that we need 40 physical qubits per logical qubit, hence 400k physical qubits in total.
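(One way to reproduce the 40-per-logical and 400k figures above as a quick sketch; the 10^-1 starting physical error rate and the ~10,000 logical qubits are assumptions I've filled in to make the arithmetic come out, not numbers stated above.)

    import math

    def physical_qubits_needed(p_physical, p_logical_target, n_logical):
        # Rule of thumb used above: every 10 extra physical qubits per logical
        # qubit buys one order of magnitude of logical error suppression.
        orders = math.log10(p_physical / p_logical_target)
        per_logical = math.ceil(10 * orders)
        return per_logical, per_logical * n_logical

    # 1e-1 physical error, 1e-5 target, ~10k logical qubits -> (40, 400000)
    print(physical_qubits_needed(1e-1, 1e-5, 10_000))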

That may sound reasonable, but we also made the assumption that, while the individual physical qubits are being manipulated, decoherence does not happen to the qubits that are waiting for their turn. This, in fact, scales poorly with the number of qubits on the chip, because physical constraints limit the number of coaxial cables that can be attached; hence multiplexing of control signals, and hence waiting by the qubits, is unavoidable. This waiting is even more pronounced in the quantum computer cluster proposals that tend to surface sometimes.

[1]: https://link.springer.com/article/10.1007/s11432-023-3961-3

ProllyInfamous 12/22/2025||
Vanderbilt University [0] is about to open a "quantum research graduate studies" campus, somewhere near Chattanooga, Tennessee.

I have a degree in chemistry from that institution, and don't have a clue what this means beyond the $1,000,000,000 economic impact this facility is supposed to have on our fair city over the next decade.

[•] <https://quantumzeitgeist.com/vanderbilt-university-quantum-q...>

[0] In partnership with our government-subsidized "commercial quantum-ready" fiber network, EPB

jasonmarks_ 12/21/2025||
Zero money take: quantum computing looks like a bunch of refrigerator companies.

The fact that error correction seems to be struggling implies unaccounted-for noise that is not heat. Who knows, maybe gravitational waves wreck your setup no matter what you do!

ktallett 12/21/2025||
As someone who works in quantum computing research, both academic and private: no, it isn't imminent in my understanding of the word, but it will happen. We are still at a point comparable to 1960s general computing development: many different platforms, and we have sort of decided on the best next step, but we have many issues still to solve. A lot of the key issues have solutions; the problem is more getting everyone to focus in the right direction, which will also mean funding starting to focus in the right direction. There are snake oil sellers right now, and life will be imminently easier when they are removed.
ecshafer 12/21/2025||
Wouldn't the comparison be more like the 1920s for computing? We had useful working computers in the 1940s working on real problems, doing what was not possible beforehand. By the 1950s we had computers doing nuclear bomb simulations, and by the 1960s we had computers in banks doing accounting and inventory. So we had computers by then, not in homes, but we had them. In the 1920s we had mechanical calculators and theories of computation emerging, but not a general-purpose computer. Until we have a quantum computer doing work at least at the level of a digital computer, I can't really believe it's the 1960s.
ktallett 12/21/2025||
I'm not going to pretend that I am that knowledgeable about classical computing history from that time period. I was primarily going off the fact that the semiconductor transistor was built in the late 40s, and I would say we have the quantum version of that in both qubit- and photonic-based computing; they work, and we have been developing on them for some time now. The key difference is that there are many more steps to get to the stage of making them useful. The transistor became useful extremely quickly, whereas in quantum computing these just haven't yet.
tdeck 12/22/2025||
Relay based electromechanical computers were doing real, practical work in the early 1940s.
tokai 12/21/2025|||
Not to be snarky, but how is it comparable to 60's computing? There was a commercial market for computers and private and public sector adoption and use in the 60s.
ktallett 12/21/2025||
There is private-sector adoption and planning now of specific, single-purpose quantum devices in military and security settings. They work and exist, although I do not believe they are installed. I may be wrong on the exact decade, as my classical computing knowledge isn't spot on. The point I was trying to make is that we have all the bits we need. We have the ability to make the photonic quantum version of a transistor (and, spoiler alert, photonics is where the focus needs to move, over the qubit approach to quantum computing), so we have hit the 50s at least. The fundamentals at this point won't change. What will change is how they are put together and how they are made durable.
GrilledChips 12/22/2025|||
In the 60's we actually had extremely capable, fully developed computers, advanced systems like the IBM System/360 and the CDC 6600.

Quantum computing is currently stuck somewhere in the 1800's, when a lot of the theory was still being worked out and few functional devices had even been constructed.

ktallett 12/22/2025||
Oh no, that isn't factually correct. We have the theory. The theory is viable, provable, and demonstrated in both of the major branches of quantum computing, qubit and photonic. The key issue, as I say, is that each branch has multiple 'architectures', for lack of a better term. We do have functional devices; it's just that the function they provide is useless, as we can already do it on a laptop. Which is partially a massive issue, as quantum computing almost needs to skip ahead of the development years classical computing was afforded.
noname120 12/22/2025|||
What makes it more akin to 60's general computing development than 60's fusion power development (which is still ongoing!)? The former was incremental; the latter requires major technological breakthroughs before reaching any sort of usefulness. Quantum computing feels more like there are roadblocks that can't be ironed out without several technological revolutions.
andsoitis 12/21/2025|||
> it will happen.

If you were to guess what reasons there might be that it WON’T happen, what would some of those reasons be?

ktallett 12/21/2025||
So in my view, the issues I think about now are:

- Too few researchers. In my area of quantum computing, I would say there is one other group that has any academic rigour and is actually making significant and important progress. The two other groups are using non-reproducible results for credit and for funding from private companies. You have FAANG-style companies also doing research, and the research that comes out is still clearly for funding; it doesn't stand up under scrutiny of method (there usually isn't one, although that will soon change, as I am in the process of producing a recipe to get to the point we are currently at, which is as far as anyone is) or repeatability.

- Too little progress. This is due to the research focus being spread too thin. We currently have the classic digital (qubit) vs analogue (photonic) quantum computing fight, and even within each we have such broad variation in where to focus. Therefore each category is still really just at the start, as we are going in so many different directions. We aren't pooling our resources and trying to make progress together. This is also where a lack of openness regarding results and methods harms us, as does a lack of automation: most significant research is done by human hand, which means building on it at a different research facility often requires learning from the person who developed the method in person, if possible, or at worst redeveloping the method from scratch, which is a waste of time. If we don't see the results, the funding won't be there. Obviously classical computing eventually found a use case and then became useful for the public, but I fear we may not get to that stage because we may take too long.

As an aside, we may also get to the stage where it is useful, but only in a military/security setting. I have worked on a security project (I was not bound by any NDA, surprisingly, but I'm still wary) featuring a quantum setup that could, loosely, be compared to a single-board computer (say an ESP32), although much larger. There is some value to it, and that particular project could be implemented into security right now (I do not believe it has been or will be; I believe it was a viability study) and isn't that far off. But that particular project has no uses outside of the military/security.

wasabi991011 12/23/2025||
> I would state there is one other group that has any academic rigour, and is actually making significant and important progress.

I agree there's a lot of poorly written papers and unrigorous research. I'm at the beginning of my PhD, so I still don't quite have every group vetted yet. Could you share your area, and what groups to follow (yours and the other good one)?

BlackFly 12/22/2025|||
Eh, quantum computing could very well be the next nuclear fusion where every couple of years forever each solved problem brings us to "We're 5 years away!"

Yet, for sure we should keep funding both quantum computing and nuclear fusion research.

layer8 12/21/2025|||
What makes you confident that it will happen?
mvkel 12/21/2025||
The people who are inside the machine are usually the least qualified to predict the machine's future
uejfiweun 12/22/2025||
What an idiotic take. The LEAST qualified? Should I go ask some random junkie off the street where quantum computing will be in 5 years?
charcircuit 12/22/2025||
Yes. Apply wisdom of the crowd.
ursAxZA 12/22/2025|
Which one ends up being more accurate — quantum-computing forecasts or fashion-magazine trend predictions?