FAQ on Microsoft's topological qubit thing
(scottaaronson.blog) | 311 points by ingve a day ago
The quote that struck me was
> I foresee exciting times ahead, provided we still have a functioning civilization in which to enjoy them.
If you are shocked by this, I suggest not reading his other recent topics.
There seems to be a bit of a disconnect between the first and the second sentence (to my completely uneducated mind).
If topological qubits turn out to be so much more reliable then it doesn't really matter how much time was spent trying to make other types of qubits more reliable. It's not really a head start, is it?
Or are there other problems besides preventing unwanted decoherence that might take that much time to solve?
The point I think is this: if topological qubits are similar to other types of qubits, then investing in them is going to be disappointing because the other approaches have so much more work put into them.
So, he is saying that this approach will only pay off if topological qubits are a fundamentally better approach than the others being tried. If they turn out to be, say, merely twice as good as trapped ion qubits, they'll still only get to the achievements of current trapped ion designs with another, say, 10-15 years of continued investment.
The whole point though is that they are a step-function improvement over traditional qubits, in a way that makes it simply a type error to compare them.
The utility of traditional qubits depends entirely on how reliable and long-lived they are, and how well they can scale to larger numbers of qubits. These topological qubits are effectively 100% reliable, infinite duration, and scale like semiconductors. According to the marketing literature, at least…
Yeah I mean that's exactly what MS are talking about, only requiring 1/20 of the checksum qubits or something.
https://www.ft.com/content/a60f44f5-81ca-4e66-8193-64c956b09...
except we don't need opinions, we need proof and that is what we are lacking.
The only expert in the FT article is Dr. Sankar Das Sarma who (from Wikipedia)
"In collaboration with Chetan Nayak and Michael Freedman of Microsoft Research, Das Sarma introduced the ν = 5 / 2 topological qubit in 2005"
So you might understand why this FT article is not adding anything to the discussion, which is not about the theory but about MS's claim of an actual breakthrough. They show a chip; we'd like proof of what the chip actually does.
A very important statement is in the peer review file that everyone should read:
"The editorial team wishes to point out that the results in this manuscript do not represent evidence for the presence of Majorana zero modes in the reported devices. The work is published for introducing a device architecture that might enable fusion experiments using future Majorana zero modes."
https://static-content.springer.com/esm/art%3A10.1038%2Fs415...
Thanks for your interest. I'm part of the Microsoft team. Here are a couple of comments that might be helpful:
1) The Nature paper just released focuses on our technique of qubit readout. We interpret the data in terms of Majorana zero modes, and we also do our best to discuss other possible scenarios. We believe the analysis in the paper and supplemental information significantly constrains alternative explanations but cannot entirely exclude that possibility.
2) We have previously demonstrated strong evidence of Majorana zero modes in our devices, see https://journals.aps.org/prb/pdf/10.1103/PhysRevB.107.245423.
3) On top of the Nature paper, we have recently made additional progress which we just shared with various experts in the field at the Station Q conference in Santa Barbara. We will share more broadly at the upcoming APS March meeting. See also https://www.linkedin.com/posts/roman-lutchyn-bb9a382_interfe... for more context.
>signal-to-noise ratio of 1
Hmmm.. appreciate the honesty :)
That's from the abstract of the upcoming conference talk (Mar 14)
>Towards topological quantum computing using InAs-Al hybrid devices
Presenter: Chetan Nayak (Microsoft)
The fusion of non-Abelian anyons is a fundamental operation in measurement-only topological quantum computation. In one-dimensional topological superconductors, fusion amounts to a determination of the shared fermion parity of Majorana zero modes. Here, we introduce a device architecture that is compatible with future tests of fusion rules. We implement a single-shot interferometric measurement of fermion parity in indium arsenide-aluminum heterostructures with a gate-defined superconducting nanowire. The interferometer is formed by tunnel-coupling the proximitized nanowire to quantum dots. The nanowire causes a state-dependent shift of these quantum dots' quantum capacitance of up to 1fF. Our quantum capacitance measurements show flux h/2e-periodic bimodality with a signal-to-noise ratio of 1 in 3.6 microseconds at optimal flux values. From the time traces of the quantum capacitance measurements, we extract a dwell time in the two associated states that is longer than 1ms at in-plane magnetic fields of approximately 2T. These measurements are discussed in terms of both topologically trivial and non-trivial origins. The large capacitance shift and long poisoning time enable a parity measurement with an assignment error probability of 1%.
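For a sense of how those numbers fit together, here is a rough back-of-envelope sketch. The assumptions are mine, not the abstract's: Gaussian readout noise, SNR defined as peak separation over noise standard deviation, and SNR growing as the square root of the integration time. Under those assumptions, reaching the quoted ~1% assignment error takes integration on the order of tens of microseconds, still far shorter than the >1 ms dwell time:

```python
import numpy as np
from scipy.special import erfc

# Back-of-envelope sketch (my assumptions, not the abstract's):
# - readout noise is Gaussian
# - SNR = separation between the two parity peaks / noise std
# - SNR grows as sqrt(integration time), so SNR(t) = SNR0 * sqrt(t / t0)
snr0, t0 = 1.0, 3.6e-6  # "SNR of 1 in 3.6 microseconds" from the abstract

def assignment_error(snr):
    """Misassignment probability with a threshold halfway between the two Gaussian peaks."""
    return 0.5 * erfc(snr / (2 * np.sqrt(2)))

for t in (3.6e-6, 20e-6, 50e-6, 100e-6):
    snr = snr0 * np.sqrt(t / t0)
    print(f"t = {t*1e6:5.1f} us   SNR ~ {snr:4.1f}   error ~ {assignment_error(snr):.3f}")
```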
As the recent results from CS and math on the front pages have shown, one doesn't have to be unknown or underfunded in order to produce verifiable breakthroughs, but it might help...
Seems like John Baez didn't notice those lines in the peer review either
https://mathstodon.xyz/@johncarlosbaez/114031919391285877
TIL: read the peer review first
Microsoft claims that it works. However, the Nature reviewers apparently do not yet feel comfortable vouching for this claim.
Another recent writeup that adds some nuance to this (and other claims), summarizing the quantum-skeptic positions:
https://gilkalai.wordpress.com/2025/02/17/robert-alicki-mich...
I think that Kalai here is very seriously understating how fringe/contrarian his views are. He's not merely stating that there's too much optimism about potential future results, or that there's some kind of intractable theoretical or practical bottleneck that we'll soon reach and won't be able to overcome. He's saying that any kind of quantum advantage—a thing that numerous experiments, from different labs in academia and industry, using a wide variety of approaches, have demonstrated over the past decade—is impossible, and therefore all of those experimental results were wrong and need to be retracted. His position was scientifically respectable back when the possibility he was denying hadn't actually happened yet, but I don't think it is anymore.
I think he is playing it smart. The more fringe/contrarian it is, the bigger the payoff if he turns out to be right. So far nothing of much use has come out of QC, and if nothing ever does, then the hype pendulum swings back at some point and he wins big. If not, his position will seem silly, but there's not much risk to his reputation; being skeptical of a new model is intellectually fine, and even courageous if it goes against the mainstream. I see it as similar to those who called out the "replication crisis" in the social sciences.
I think what many people are missing in the discussion here is that topological qubits are essentially a different type of component. This is analogous to the relay-triode-transistor technology progression.
It is still speculative whether the topological approach will be effective, but there are significant implications if it is. Scalability, reliability, and speed are all on the table here.
While other technologies have a significant head start, much of that “head start” is transferrable knowledge, similar to the relay-triode-transistor-integrated-circuit progression. Each new component type multiplies the effectiveness of the advances made by the previous generation of technologies; it doesn’t start over.
IF the topological qubits can be made to be reliable and they live up to their scalability promises, it COULD be a revolutionary step, enabling exponential gains in cost, scalability, and capability. IF.
Recent and related:
Microsoft unveils Majorana 1 quantum processor - https://news.ycombinator.com/item?id=43104071 - Feb 2025 (150 comments)
Topological analysis shows that under like-like exchanges, multiple distinct pathways exist in 2D (in 3D, topology does not distinguish pathways for these exchanges). This permits real anyon particles to exist when the physics is confined to 2D within quantum limits, such as in a layer of graphene. Certain configurations of layers (“moiré materials”) can be made periodic to provide a suitably scaled lattice for anyons to localize and adopt particular quantum states.
Anyons lie somewhere between fermions and bosons in their state occupancy and statistics: no two fermions may occupy the same state, bosons can all occupy the same state, and anyons follow rational-number patterns, e.g. up to 2 anyons can occupy 3 states.
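For reference, the textbook way to state the exchange part of this (a standard fact, not something taken from the comment above): exchanging two identical particles multiplies the wavefunction by a phase, and only in 2D can that phase be something other than ±1:

```latex
\psi(x_2, x_1) = e^{i\theta}\,\psi(x_1, x_2), \qquad
\theta = 0 \ (\text{bosons}), \quad
\theta = \pi \ (\text{fermions}), \quad
\theta \ \text{arbitrary} \ (\text{abelian anyons, 2D only})
```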
I enjoy the quality of "it's too early to say" in Aaronson's writing. It won't stop share price movement or hopeless optimism amongst others.
I do wonder if he is running a simple 1st order differential on his own beliefs. He certainly has the chops here, and self introspection on the trajectory of highs and lows and the trends would interest me.
A bit off topic - I really like Scott Aaronson and his blog, but hate the comment section - he engages a lot with the comments (which is great!) but it's really hard to follow, as each comment is presented as a new message.
I made this small silly chrome extension to re-structure the comments to a more readable format - if anyone is interested
I find the opposite, he often makes some ridiculous claim in the post, the comments (the ones he lets through) rightfully point out how wrong he was, then he cherry-picks and engages one of the more outrageous comments, so a superficial observer is left with the impression that the original claim was OK.
This experiment only created one qubit, so no.
The experiments with lots of qubits... technically yes, they can do things. I think the factoring record is 21. But you might be disappointed a) when you see that most algorithms using quantum computers require conventional computation to transform the problem before and after the quantum steps, b) when you learn we only have a few quantum algorithms, and they are not general calculation machines, and c) when you look under the hood and see that the error-correcting stuff makes it actually kinda hard to tell how much is really being done by the actual quantum device.
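To make point (a) concrete with the factoring-21 record mentioned above: in Shor's algorithm the quantum device only performs the order-finding (period-finding) step; everything around it is classical number theory. A minimal sketch in Python, with the quantum subroutine replaced by a classical brute-force stand-in (my illustration, not anyone's actual implementation):

```python
from math import gcd

def find_period_classically(a: int, n: int) -> int:
    """Stand-in for the quantum order-finding subroutine:
    the smallest r > 0 with a**r == 1 (mod n)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_wrapper(n: int, a: int = 2):
    """The classical pre/post-processing that surrounds the quantum step."""
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)   # lucky guess, no quantum step needed
    r = find_period_classically(a, n)       # <-- the only "quantum" part
    if r % 2 == 1:
        return None                          # odd period: retry with another a
    factors = gcd(pow(a, r // 2) - 1, n), gcd(pow(a, r // 2) + 1, n)
    return factors if factors[0] * factors[1] == n else None

print(shor_classical_wrapper(21))            # (7, 3), via a = 2 and period r = 6
```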
Thanks for the reply. I've always been a bit puzzled, from my limited knowledge of quantum mechanics, as to how they are supposed to work. I mean, you make a measurement on a quantum system, and sure, the probability amplitude is the result of adding up all sorts of possible paths, but you still only get the one measurement out, and I'm not sure how that's supposed to tell you much. All a bit beyond me.
Does https://scottaaronson.blog/?p=208 help?
(Also, the factoring-21 result is from 2012, and may have been surpassed since then depending on how you count. Recent quantum-computing research has focused less on factoring numbers and more on problems like random circuit sampling where it's easier to get meaningful results with the noisy intermediate-scale machines we have today. Factoring is hard mode because you have to get it exactly right or else it's no good at all.)
Helps a bit, thanks. I guess it's a bit like in X-ray crystal diffraction you get light and dark patches depending on how the photon paths interacting with trillions of atoms add up; with a quantum computer you'd get light or dark outputs depending on how the amplitudes of trillions of calculations add up?
You typically have to sample multiple times until you can build up the distribution that you can convert into a solution to your problem.
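A toy illustration of both points, the interference analogy above and the need to repeat runs to build up the distribution. The two-outcome "device" and its path amplitudes here are made up for illustration: the probabilities come from summing complex amplitudes, and a single run only yields one sample.

```python
import numpy as np

rng = np.random.default_rng(0)
phase = np.exp(1j * np.pi / 3)

# Two "paths" lead to each of two outcomes. Outcome probabilities come from
# summing complex amplitudes (interference), not from summing path probabilities.
amplitudes = {
    "0": 0.5 + 0.5 * phase,      # paths mostly add up  (bright fringe)
    "1": 0.5 - 0.5 * phase,      # paths mostly cancel  (dark fringe)
}
probs = {k: abs(a) ** 2 for k, a in amplitudes.items()}
print(probs)                      # {'0': 0.75, '1': 0.25}

# One run of the device gives a single sample from this distribution;
# you repeat many times (shots) to estimate it.
shots = rng.choice(list(probs), size=1000, p=list(probs.values()))
print({k: int((shots == k).sum()) for k in probs})
```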
We should celebrate this for what it is: another brick in the wall that we’re building to achieve practical quantum computing systems.
That's the best-case scenario. It remains possible that topological qubits, even if they are theoretically achievable, will turn out to be a dead end engineering-wise. Presumably competing quantum computing labs think this is likely, since they're not working on topological qubits; only Microsoft thinks they'll end up being important.
Yes, just like putting two bricks on top of each other is a first step to the moon.
So far, the only known algorithm relevant to AI that would run faster on a theoretical quantum computer is linear search, where quantum computers offer a modest speedup (linear search is O(n) on a classical computer, while Grover's algorithm is O(sqrt(n)) on a quantum computer - this means that for a list of a million elements, you can scan it in about 1,000 steps on a quantum computer instead of 1,000,000 steps on a classical one).
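For anyone who wants to see the sqrt(N) scaling directly, here is a small plain-numpy statevector sketch of Grover search (a classical simulation, so only tiny N; no real hardware or SDK assumed). After roughly (pi/4)*sqrt(N) iterations the marked item is measured with near-certainty, which is where the 1,000-vs-1,000,000 figure comes from:

```python
import numpy as np

def grover_iterations_and_success(n_qubits: int, marked: int):
    """Simulate Grover search over N = 2**n_qubits items by direct
    statevector manipulation (classical simulation, exponential in n_qubits)."""
    N = 2 ** n_qubits
    s = np.full(N, 1 / np.sqrt(N))           # uniform superposition |s>
    state = s.copy()
    k = int(round(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) * sqrt(N) iterations
    for _ in range(k):
        state[marked] *= -1                   # oracle: phase-flip the marked item
        state = 2 * s * (s @ state) - state   # diffusion: reflect about |s>
    return k, abs(state[marked]) ** 2

k, p = grover_iterations_and_success(n_qubits=10, marked=123)
print(k, p)   # ~25 iterations for N = 1024, success probability ~ 1
```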
However, even this is extremely theoretical at this time - no quantum computer built so far can execute Grover's algorithm; they are not reliable enough to get any result with probability higher than noise, and in any case can't apply the number of steps required for even a single pass without losing entanglement. So we are still very very very far away from a quantum computer that could reach anything like the computing performance of a single consumer-grade GPU. We're actually very far away from a quantum computer that even reaches the performance of a hand calculator at this time.
Pure Quantum Gradient Descent Algorithm and Full Quantum Variational Eigensolver https://arxiv.org/abs/2305.04198
There is not an "OS" or anything even remotely like it. For now these things behave more like physics experiments than computers.
You can play around with "quantum programming" through (e.g.) some of IBM's offerings, and there has been work on quantum programming languages like Q# from Microsoft, but it's unclear (to me) how useful these are.
that's not the way to think about quantum computing AFAIK.
Think of these as accelerators you use to run some specific algorithm the result of which your "normal" application uses.
More akin to GPUs: your "normal" applications running on "normal" CPUs offload some specific computation to the GPU and then use the result.
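As a hedged sketch of that offload pattern - the backend class and method names here are entirely made up, not any real SDK's API (Qiskit, Cirq, and Q# all differ in the details):

```python
import random

class FakeBackend:
    """Hypothetical stand-in for a QPU or simulator backend."""
    def run(self, circuit, shots):
        # A real backend would execute `circuit` on quantum hardware and return
        # measurement counts; here we just fake a bit-string histogram.
        outcomes = [random.choice(["00", "11"]) for _ in range(shots)]
        return {b: outcomes.count(b) for b in set(outcomes)}

def solve_with_accelerator(problem, backend):
    circuit = f"circuit-for-{problem}"          # classical pre-processing (placeholder)
    counts = backend.run(circuit, shots=1000)   # offload the quantum part
    return max(counts, key=counts.get)          # classical post-processing of results

print(solve_with_accelerator("toy problem", FakeBackend()))
```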
> "an OS for those"
Or at least an OS driver for the devices supporting quantum computing if/when they become more standard.
Other than fast factorization and linear search, is there anything that Quantum Computing can do? Those do seem important, but limited in scope - is this a solution in search of a problem?
I've heard it could get us very accurate high-detail physics simulations, which has potential, but don't know if that's legit or marketing BS.
Hey, anyons in a 2D electron gas. Wrote about it a while ago and got downvoted!
I had a thought while reading this:
Are we, in fact, in the very early stages of gradient descent toward what I want to call "software defined matter?"
If we're learning to make programmable quantum physics experiments and use them to do work, what is that the very beginning of? Imagine, say, 300 years from now.
TVs: displays are already software-defined matter. But ya.
This seems like quite a bold claim, that Microsoft proved that neutrinos are Majorana particles...
The chip is literally called Microsoft Majorana 1
Indeed, Majorana fermions are completely unseen/unconsidered outside of neutrinos. In fact, all Standard Model fermions except neutrinos are proven to be Dirac fermions.
You are confusing quasiparticles with fundamental particles. If we were to observe a Majorana topological state, this would have no bearing on the properties of the neutrino.
Also to say Majorana fermions are not considered outside of neutrinos is a patently ridiculous and ignorant statement. There is absolutely nothing in physics to say the only particle that can possibly be Majorana is a neutrino. For example, there have been theories of Majorana dark matter which are a consideration of fundamental Majorana particles outside of neutrinos.
money quote: