See also my CV or the list of papers with bibliographic info only.
D. Gottesman, "Class of quantum error-correcting codes saturating the quantum Hamming bound," Phys. Rev. A 54, 1862-1868 (1996). Also quant-ph/9604038.
This is probably my most important paper. In it I develop the formalism of stabilizer codes. I also describe a class of codes to correct one error using 2^j qubits and some miscellaneous other results. It is probably not the best introduction to stabilizer codes, however - for that I would recommend my thesis.
D. Gottesman, "Pasting quantum codes," quant-ph/9607027.
A lightweight paper. It describes a method for constructing new quantum codes from old ones. The referee suggested a generalization, which will be in the revised version of the paper if I ever finish it. Meanwhile, the extended method appears in section 3.5 of my thesis.
R. Cleve and D. Gottesman, "Efficient computations of encoding for quantum error correction," Phys. Rev. A 56, 76-82 (1997). Also quant-ph/9607030.
This paper shows a pretty good algorithm for encoding stabilizer codes. It also develops a standard form for stabilizer codes which is sometimes useful. There is an error in the last stage of the construction: all controlled-phase gates below the diagonal should be removed.
D. Gottesman, A. Kitaev, and J. Preskill, "Encoding a Qubit in an Oscillator," Phys. Rev. A 64, 012310 (2001) (21 pages). Also quant-ph/0008040.
This paper investigates coding and fault-tolerance for systems with continuous variables. The codes are designed to protect against ubiquitous small shifts in amplitude and phase; the effect is much the same as digitizing an analog classical signal. Getting a universal set of fault-tolerant gates is pretty hairy in this case, even more so than usual, but otherwise the construction is fairly straightforward. These codes also suggest a way to perform quantum key distribution with continuous variables.
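The digitization analogy can be seen in a toy calculation. This Python sketch (my illustration, not code from the paper) treats only the classical shift in one quadrature: measuring position modulo sqrt(pi) reveals a small shift error without revealing the encoded information, and rounding removes it, just as rounding digitizes an analog signal.

```python
import numpy as np

# Toy sketch of shift-error digitization for an ideal GKP-style code
# (illustration only). Grid spacing sqrt(pi) is the stabilizer period;
# any shift of magnitude less than sqrt(pi)/2 is corrected exactly.
d = np.sqrt(np.pi)

def syndrome(x):
    """Shift of x from the nearest grid point, folded into (-d/2, d/2]."""
    s = np.mod(x, d)
    return s - d if s > d / 2 else s

def correct(x):
    """Undo any shift error of magnitude < sqrt(pi)/2."""
    return x - syndrome(x)

# A grid point shifted by a small error u is restored exactly.
x0 = 4 * d
u = 0.3            # |u| < d/2 ~ 0.886
assert np.isclose(correct(x0 + u), x0)
assert np.isclose(correct(x0 - u), x0)
```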
Maryam Mudassar, Riley W. Chien, and Daniel Gottesman, "Encoding Majorana Codes," arXiv:2402.07829 [quant-ph].
In this paper, we show how to encode Majorana stabilizer codes using unitary Majorana Clifford circuits. Majorana stabilizer codes encode qubits in physical fermion degrees of freedom. Compared to the similar result for qubit stabilizer codes, the complication of Majorana codes is that there is an additional parity constraint on the gates. One effect of this is that some Majorana codes cannot be encoded without the use of an ancilla mode.
A. Ambainis, D. Gottesman, "The Minimum Distance Problem for Two-Way Entanglement Purification," IEEE Trans. Info. Theory 52, issue 2, 748-753 (2006), DOI: 10.1109/TIT.2005.862089. Also quant-ph/0310097.
This paper discusses not quantum error correction, per se, but instead a close analogue using back-and-forth communication. It has long been known that adding a backwards classical channel helps transmit quantum information at a higher rate when we wish to correct against the most probable type of errors. In this paper, we show that a backwards classical channel also helps protect against worst-case errors.
C. Crepeau, D. Gottesman, A. Smith, "Approximate quantum error-correcting codes and secret sharing schemes," Proc. Eurocrypt 2005, p. 285 (Springer-Verlag, 2005), DOI: 10.1007/11426639_17. Also quant-ph/0503139.
By applying the usual conditions for quantum error-correction, it is straightforward to show that for a code consisting of n registers, it is impossible to correct general errors on n/4 registers if we don't know which registers are wrong. It turns out, however, that this is only true if we insist that our code correct the errors exactly. If we only ask for approximate correction, we show in this paper that it is possible to correct errors on up to n/2 registers with arbitrarily high accuracy. The codes we present use large registers, so are probably not terribly useful for communication, but have applications for cryptography, for instance in improved multiparty quantum computations.
Jimin Yi, Weicheng Ye, Daniel Gottesman, Zi-Wen Liu, "Complexity and order in approximate quantum error-correcting codes," arXiv:2310.04710 [quant-ph].
An exact quantum error-correcting code has a list of errors that it can correct, and if one of those errors occurs, the state is corrected perfectly (assuming we have perfect quantum computers). In an approximate code, if a correctable error is identified, the code still cannot correct it perfectly, only to a good degree of approximation. How good is good enough? In this paper, we propose an answer to this question in terms of a new quantity called the subsystem variance and show that codes with a sufficiently low subsystem variance (below k/n for k logical qubits and n physical qubits) need nontrivial circuits to encode them. This bound and the associated complexity transition seem connected with transitions in behavior in a number of situations and physical systems, suggesting this is in fact a reasonable answer to the question of how good an approximate code should be.
D. Gottesman, "Stabilizer codes and quantum error correction," quant-ph/9705052, Caltech Ph.D. thesis.
This is my Ph.D. thesis. I designed it to serve as a review on quantum error correction. Unfortunately, I also decided to stick in a bunch of results that were too small to publish by themselves. My advice is to skip the boring parts. The first three chapters serve as a pretty good introduction to quantum error-correcting codes. There are a few errata.
D. Gottesman, "An Introduction to Quantum Error Correction," in Quantum Computation: A Grand Mathematical Challenge for the Twenty-First Century and the Millennium, ed. S. J. Lomonaco, Jr., pp. 221-235 (American Mathematical Society, Providence, Rhode Island, 2002). Also quant-ph/0004072.
This is designed to be a short introduction to quantum error correction. It assumes a minimal knowledge of quantum mechanics, but not much else.
D. Gottesman, "Quantum Error Correction and Fault-Tolerance," in Encyclopedia of Mathematical Physics, eds. J.-P. Francoise, G. L. Naber and S. T. Tsou, Oxford: Elsevier, 2006 (ISBN 978-0-1251-2666-3), vol. 4, pp. 196-201. Also quant-ph/0507174.
This is a short article written for the Encyclopedia of Mathematical Physics. As such, it is more encyclopedic than my other introductory error correction article, and thus less pedagogical. However, it covers fault tolerance as well as plain quantum error correction.
D. Gottesman, "An Introduction to Quantum Error Correction and Fault-Tolerant Quantum Computation," in Quantum Information Science and Its Contributions to Mathematics, Proceedings of Symposia in Applied Mathematics 68, pp. 13-58 (Amer. Math. Soc., Providence, Rhode Island, 2010). Also arXiv:0904.2557 [quant-ph].
This paper gives a detailed introduction to quantum error correction and the threshold theorem. It includes my previous introduction on quantum error correction, but adds lots of new material, including fault-tolerant gate constructions and a full proof of the threshold theorem.
D. Gottesman, "A theory of fault-tolerant quantum computation," Phys. Rev. A 57, 127-137 (1998). Also quant-ph/9702029.
This is a significant paper. It develops methods for discussing fault-tolerant computation and shows how to do universal fault-tolerant computation for any stabilizer code. The methods are actually applicable for a wide variety of problems beyond error-correction. Basically anything which uses just Hadamards, CNOTs, and measurements can be usefully analyzed using these methods.
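The core calculation behind this style of analysis is conjugating Pauli operators by Clifford gates, U P U^dagger. As a quick numpy check (my illustration, using standard gate definitions, not notation from the paper):

```python
import numpy as np

# Tracking Pauli operators through Clifford gates in the Heisenberg
# picture -- the basic move in stabilizer fault-tolerance arguments.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def conj(U, P):
    return U @ P @ U.conj().T

# Hadamard swaps X and Z:
assert np.allclose(conj(H, X), Z)
assert np.allclose(conj(H, Z), X)

# CNOT copies X forward (X⊗I -> X⊗X) and Z backward (I⊗Z -> Z⊗Z):
assert np.allclose(conj(CNOT, np.kron(X, I)), np.kron(X, X))
assert np.allclose(conj(CNOT, np.kron(I, Z)), np.kron(Z, Z))
```

Because Cliffords map Pauli operators to Pauli operators, a circuit of Hadamards, CNOTs, and measurements can be followed entirely at this level, without tracking the full state vector.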
D. Gottesman, "Fault-tolerant quantum computation with higher-dimensional systems," in Quantum Computing and Quantum Communications, Proceedings of the 1st NASA International Conference on Quantum Computing and Quantum Communications (QCQC), Palm Springs, California, ed. C. Williams, pp. 302-313 (New York, NY, Springer-Verlag, 1998). Also Chaos, Solitons, and Fractals 10, 1749-1758 (1999). Also quant-ph/9802007.
This paper extends the results of my earlier paper on fault-tolerant computation to higher-dimensional systems than qubits. There is also a refinement of the original procedure for universal computation which allows a shorter proof and often less overhead in the computer. Another useful feature of the new proof is that it allows fault-tolerant conversion of data between different codes, even if they have different block sizes. Otherwise, it is a pretty straightforward generalization of the earlier results.
D. Gottesman and I. Chuang, "Demonstrating the viability of universal quantum computation using teleportation and single-qubit operations," Nature 402, 390-393 (1999). Also "Quantum Teleportation is a Universal Computational Primitive," quant-ph/9908010.
This paper shows how to perform certain quantum gates by teleporting qubits through a specially prepared ancilla instead of a standard EPR pair. Some additional processing must be done afterwards, but in certain cases, the "correction" gates needed are easier to perform than the target gate. This may be useful for building quantum computers and allows more direct fault-tolerant constructions of a number of gates.
D. Gottesman, "Fault-Tolerant Quantum Computation," Physics in Canada 63, No. 4, 183-189 (Oct.-Dec. 2007). Also quant-ph/0701112.
This is a short introduction to research in fault tolerance. It briefly covers the basic fault-tolerant protocol for the seven-qubit code and discusses some more recent research on tradeoffs between different properties frequently assumed for fault-tolerant quantum computers.
V. Veitch, S. A. Hamed Mousavian, D. Gottesman, J. Emerson, "The Resource Theory of Stabilizer Computation," New J. Phys. 16, 013009 (2014), DOI: 10.1088/1367-2630/16/1/013009. Also arXiv:1307.7171 [quant-ph].
In order to get a universal set of fault-tolerant quantum gates, it is usually necessary to use a trick involving some particular special states called "magic states", which are then combined with the data undergoing computation (for instance, via teleportation) to perform additional gates. Making these magic states can be difficult, so frequently the process begins by making low-quality magic states (ones with many errors) and then processing them to get better magic states. In this paper, we have begun the systematic study of magic states as a resource. In particular, we discuss two different functions which quantify the amount of "magic" in a state, and thus more precisely measure how useful a particular magic state is and how many are needed to make a better magic state.
D. Gottesman, L. L. Zhang, "Fibre bundle framework for unitary quantum fault tolerance," arXiv:1309.7062 [quant-ph].
Two of the main classes of fault-tolerant gates are transversal gates and topological gates. Transversal gates work by doing only gates between corresponding qubits in different blocks of a quantum error-correcting code, which limits the extent to which errors can spread within the code. Topological gates usually work by moving around some particles ("anyons") which have unusual properties which guarantee that small distortions of the paths of the particles will still lead to the same quantum gate. On the surface, these seem like very different approaches to fault tolerance, but in this paper we show that they can be unified into a single picture. In particular, transversal gates can also be seen as following a path in the space of possible codes, and the fault tolerance of the gate guarantees that small distortions of this path lead to the same quantum gate, just as with topological gates.
Shawn X. Cui, Daniel Gottesman, Anirudh Krishna, "Diagonal gates in the Clifford hierarchy", Phys. Rev. A 95, 012329 (2017), DOI: 10.1103/PhysRevA.95.012329. Also arXiv:1608.06596 [quant-ph].
Stabilizer codes have a close relation to a particular set of quantum gates known as the Clifford group, which are all that is needed for encoding and decoding stabilizer codes. These are also usually the easiest gates to do on stabilizer codes in a fault-tolerant way, but unfortunately are not sufficient to fully realize the computational power of a quantum computer. Ike Chuang and I introduced a hierarchy of gates that are progressively more complicated than the Clifford group. These gates have fault-tolerant realizations that are more complicated than Clifford group gates but are still achievable (in contrast to totally arbitrary gates). However, it is still unknown exactly which gates are in this hierarchy and where they sit in the hierarchy. In this paper, we answer the question for diagonal gates.
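The hierarchy is defined recursively: C_1 is the Pauli group, and C_k consists of those gates that map every Pauli into C_{k-1} under conjugation. A quick numpy check (my illustration, not an example from the paper) shows the diagonal T gate sits in the third level: conjugating the Pauli X by T gives a Clifford operator (S times X, up to a global phase) rather than a Pauli.

```python
import numpy as np

# Checking that T is in the third level of the Clifford hierarchy:
# T X T^dagger is a Clifford gate (not a Pauli), namely e^{-i pi/4} S X.
X = np.array([[0, 1], [1, 0]], dtype=complex)
S = np.diag([1, 1j])                       # phase gate, a Clifford
T = np.diag([1, np.exp(1j * np.pi / 4)])   # third-level diagonal gate

TXTdag = T @ X @ T.conj().T
assert np.allclose(TXTdag, np.exp(-1j * np.pi / 4) * (S @ X))
```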
Daniel Gottesman, "Fault tolerance in small experiments," arXiv:1610.03507 [quant-ph].
As experimental realizations of quantum computers get larger, more and more theoretical protocols can be demonstrated on them. It is actually surprisingly subtle to define what makes a successful experimental demonstration of fault-tolerant quantum computation, and what needs to be shown. In this paper, I give an answer which should be helpful in guiding experimentalists trying to do these demonstrations. Moreover, I give what I believe to be the smallest possible protocol that could demonstrate fault tolerance. Fault-tolerant experiments are particularly exciting because they are not just demonstrations of fault tolerance -- they have the potential to actually answer theoretical questions about the interplay of fault-tolerant protocols with the kinds of error that occur in real experiments.
Daniel Gottesman, "Opportunities and Challenges in Fault-Tolerant Quantum Computation," in The Physics of Quantum Information, ed. David Gross, Alexander Sevrin, and Peter Zoller, Proc. 28th Solvay Conf. Physics (World Scientific, 2023), pp. 287-309. Also arXiv:2210.15844 [quant-ph].
In this paper, after a brief overview of some open problems in fault tolerance, I propose a new paradigm for designing fault-tolerant protocols. In the standard approach to fault tolerance, we pick a specific quantum error-correcting code and, while we may temporarily leave it to perform a fault-tolerant gate, we always return to the original code. We design fault-tolerant gates by limiting error propagation. My proposal is to instead let errors propagate and try to identify the spacetime origin of the error. We may also be constantly changing quantum codes. To handle all of this, we should try to treat space and time on as equal a footing as possible.
D. Gottesman, "Fault-Tolerant Quantum Computation with Local Gates," J. Modern Optics 47, 333-345 (2000). Also quant-ph/9903099.
This paper shows how to derive a threshold result for fault-tolerant computation when only local (nearest or next-to-nearest neighbor) gates are available in one, two, or three dimensions. This is not a difficult result for someone well-schooled in the threshold calculation, but there are a couple of pitfalls for the unwary.
P. Aliferis, D. Gottesman, and J. Preskill, "Quantum accuracy threshold for concatenated distance-3 codes," Quant. Information and Computation 6, No. 2, 97-165 (2006). Also quant-ph/0504218.
This paper gives a new proof of the threshold for fault-tolerant quantum computation. It is simpler than previous proofs, and also more general, applying to concatenation of codes that correct just one error or many errors. The proof suggests a way of counting possible errors to give a reasonable numerical value of the threshold. The paper is long, but (we hope) clear.
P. Aliferis, D. Gottesman, J. Preskill, "Accuracy threshold for postselected quantum computation," Quantum Information and Computation 8, No. 3, 181-244 (2008). Also quant-ph/0703264.
This paper extends our earlier paper on a proof for the accuracy threshold for fault-tolerant quantum computation. The previous paper proved the existence of a threshold for concatenated error-correcting codes, but didn't apply easily to an approach by Manny Knill which uses concatenated error-detecting codes instead to prepare reliable ancilla states for use in fault-tolerant protocols. Proving that Knill's idea actually works and getting a reasonable bound on the threshold achievable required a number of new techniques beyond those in our previous paper. With these techniques, we can now prove a threshold above 1/1000 (error rate per gate).
M. Ben-Or, D. Gottesman, A. Hassidim, "Quantum Refrigerator," arXiv:1301.1995 [quant-ph].
This paper considers fault-tolerant quantum computation when it is not possible to deliberately reset ancilla qubits during the computation. It turns out that there are three types of errors that can occur. The first class includes the depolarizing channel (which, with some probability, completely randomizes the state) and allows computation for only a logarithmic time in the number of qubits in the system. The second class includes the dephasing channel (which with some probability randomizes the phase) and allows quantum computation for a polynomial time. The third class includes the amplitude damping channel (which causes qubits to decay towards the 0 state) and allows computation for an exponential amount of time, since the noise itself allows us to reset qubits to 0.
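The three representative channels have standard Kraus-operator descriptions (textbook definitions, not code from the paper), and each set satisfies the completeness relation sum over K of K^dagger K = I:

```python
import numpy as np

# Kraus operators for the three representative noise channels,
# each parameterized by an error probability p (or damping rate g).
def depolarizing(p):
    I = np.eye(2); X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
    return [np.sqrt(1 - p) * I] + [np.sqrt(p / 3) * P for P in (X, Y, Z)]

def dephasing(p):
    return [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * np.diag([1.0, -1.0])]

def amplitude_damping(g):
    # Decay toward |0>: this is the channel that doubles as a qubit reset.
    return [np.diag([1, np.sqrt(1 - g)]),
            np.array([[0, np.sqrt(g)], [0, 0]])]

# Sanity check: each set satisfies sum K^dagger K = I.
for ks in (depolarizing(0.1), dephasing(0.1), amplitude_damping(0.1)):
    assert np.allclose(sum(K.conj().T @ K for K in ks), np.eye(2))
```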
D. Gottesman, "Fault-Tolerant Quantum Computation with Constant Overhead", Quant. Information and Computation 14, 1338-1371 (2014). Also arXiv:1310.2984 [quant-ph].
The standard form of the threshold theorem for fault-tolerant quantum computation states that the overhead, the number of physical qubits needed for each encoded qubit, increases (albeit slowly) as the computation gets larger. Standard fault-tolerant protocols require quite a large overhead even for small computers, which is inconvenient since large numbers of qubits are currently hard to come by experimentally. This raises the question of what the minimum possible overhead is for quantum computation. In this paper, I show that it need not increase as the computation gets larger and in fact can be arbitrarily small.
Noah Berthusen, Daniel Gottesman, "Partial Syndrome Measurement for Hypergraph Product Codes," arXiv:2306.17122 [quant-ph].
In this paper, we study hypergraph product codes where some fraction of the error syndrome bits are "masked", meaning we cannot measure them all the time. Instead we only measure them occasionally, and try to perform error correction with the remainder of the error syndrome in the meantime. We find that these codes still work even with rather large amounts of masking. The motivation for studying this situation is that it is a model for a situation where some syndrome bits involve only qubits which are close together, while others -- the masked bits -- involve qubits that are far apart and therefore take a long time to measure. Since there appears to still be a threshold, this suggests we might be able to reasonably use hypergraph product codes or other high-rate low-density parity check codes in a quantum computing architecture which is restricted to perform gates between physically nearby qubits, despite the fact that any layout for hypergraph product codes will have some syndrome bits involving distant qubits.
D. Gottesman and J. Preskill, "Secure Quantum Key Distribution Using Squeezed States," Phys. Rev. A 63, 022309 (2001) (18 pages). Also in Quantum Information with Continuous Variables, eds. S. L. Braunstein and A. K. Pati, pp. 317-356 (Boston, MA, Kluwer Academic Press, 2003). Also quant-ph/0008046.
This paper proves the security of a simple key distribution scheme using squeezed light. The proof uses the method of Shor and Preskill of taking a fully quantum scheme (using entanglement purification) and reducing it to an almost-classical scheme, such as BB84. In this case, the quantum code which reduces to the squeezed state scheme is the code for continuous variables given in our paper above.
D. Gottesman and H.-K. Lo, "From Quantum Cheating to Quantum Security," Physics Today 53, no. 11, 22-27 (Nov. 2000). Also quant-ph/0111100.
This is a popular article discussing quantum cryptology. It provides an overview, assuming you know basic quantum mechanics but not much else about the subject. Of necessity, it doesn't go into too much depth about anything, but simply touches on the major results in quantum cryptology as of 2000.
D. Gottesman, H.-K. Lo, "Proof of security of quantum key distribution with two-way classical communications," IEEE Trans. Info. Theory 49, 457-475 (2003). Also quant-ph/0105121.
Standard techniques of quantum key distribution (QKD) turn out to be related to quantum error-correcting codes, and by analyzing this connection, it is possible to prove the security of QKD. It is possible to improve quantum channel capacity by using classical communications from receiver to sender as well as from sender to receiver, and this paper shows how to transfer that advantage to quantum key distribution. As a result, this paper shows that a straightforward key distribution scheme can tolerate much higher error rates than previously known.
D. Gottesman, H.-K. Lo, N. Lutkenhaus, J. Preskill, "Security of Quantum Key Distribution with Imperfect Devices," Quantum Information and Computation 4, No.5, 325-360 (2004). Also quant-ph/0212066.
Quantum key distribution (QKD) can be used to establish a secret key in the presence of an eavesdropper. It has the advantage that, unlike most other applications of quantum information, it can be realized experimentally with today's technology. Of course, realistic systems suffer from imperfections, and the eavesdropper could potentially take advantage of those imperfections to learn information about the final key. Previous security proofs covered a variety of types of imperfections, but not everything. This paper presents some new proof techniques that allow us to prove the security of QKD with pretty general kinds of small defects in the equipment.
J.-C. Boileau, D. Gottesman, R. Laflamme, D. Poulin, R. W. Spekkens, "Robust Polarization-Based Quantum Key Distribution Over Collective Noise Channel," Phys. Rev. Lett. 92, 17901 (2004). Also quant-ph/0306199.
This paper describes a new protocol for performing quantum key distribution which is designed to address a specific technical problem that arises in some systems. In particular, photons passing through an optical fiber have their polarization rotated by an amount that fluctuates in time. Luckily, the fluctuation is slow enough that sequential photons will have their polarization rotated by the same amount, meaning the problem can be addressed by a type of encoding known as a decoherence-free subspace (DFS). The main contribution of the paper lies in describing a key distribution protocol that uses a DFS but only requires measurements of individual photons and creation of entangled photon pairs (which can be done today).
R. Cleve, D. Gottesman, and H.-K. Lo, "How to Share a Quantum Secret," Phys. Rev. Lett. 83, 648-651 (1999). Also quant-ph/9901025.
This is a short paper on how to hide quantum information by distributing it among several people. It introduces and develops the theory of quantum secret sharing. It is not yet clear whether there will be any important applications of quantum secret sharing, but the theory is interesting in its own right.
D. Gottesman, "On the Theory of Quantum Secret Sharing," Phys. Rev. A 61, 042311 (2000) (8 pages). Also quant-ph/9910067.
This paper builds on the previous quantum secret sharing paper by proving a number of results about quantum secret sharing and classical secret sharing using quantum states. Perhaps the most important result in it is a construction of a quantum secret sharing scheme for an arbitrary access structure satisfying the no-cloning theorem.
H. Barnum, C. Crepeau, D. Gottesman, A. Smith, A. Tapp, "Authentication of Quantum Messages," Proc. 43rd IEEE Symposium on the Foundations of Computer Science, 449-458 (2002), DOI: 10.1109/SFCS.2002.1181969. Long version quant-ph/0205128.
Authenticating a message from Alice to Bob means encoding it so that a forger Eve cannot alter the message without being detected by Bob. This paper shows how to do this when the message is a quantum state. There is a major difference between classical and quantum authentication: in the quantum case, the message must also be encrypted, which is not at all true classically. This is fairly surprising and indicates a much closer connection between different types of quantum cryptographic protocols than between the related classical protocols. Similarly, there is no way to create a digital signature of an unknown quantum state.
C. Crepeau, D. Gottesman, A. Smith, "Secure multi-party quantum computation," Proc. 34th ACM Symposium on the Theory of Computing, 643-652 (New York, NY, ACM Press, 2002), DOI: 10.1145/509907.510000. Also quant-ph/0206138.
This paper shows how to perform general quantum computations distributed among a number of people, even when some of them are dishonest and are trying to sabotage the computation. Each person has an input quantum state, and the goal is to calculate an output state (or states) based on the input. The cheaters may choose their inputs at the start of the protocol, but cannot otherwise influence the outcome. (For instance, they cannot change the value of their inputs in the middle of the computation.) The procedure is fairly similar to a classical multi-party computation protocol, with a few things modified for the quantum setting. For instance, we have to worry about more types of errors that could be introduced by the cheaters.
M. Ben-Or, C. Crepeau, D. Gottesman, A. Hassidim, and A. Smith, "Secure Multiparty Quantum Computation with (Only) a Strict Honest Majority," Proc. 47th IEEE Symposium on the Foundations of Computer Science, 249--260 (2006), DOI: 10.1109/FOCS.2006.68. Also arXiv:0801.1544 [quant-ph].
This paper is an improvement of my earlier work on multiparty quantum computation with Crepeau and Smith, combining it with the approximate quantum error-correcting codes we worked out that tolerate more cheaters. The protocol required a variety of new tricks to make it work, but the main significance is that we originally thought it was impossible.
D. Gottesman, I. Chuang, "Quantum Digital Signatures," quant-ph/0105032.
This article introduces the concept of a quantum public key. It shows how to use a quantum public key to sign messages with unconditional security. The receiver knows the message has not been forged, and can later prove that fact to a judge, if necessary. Classical public keys are easier to distribute than private keys, and quantum public keys retain most of this advantage. However, the scheme presented uses up a lot of key; hopefully future work will be able to eliminate this disadvantage.
D. Gottesman, "Uncloneable Encryption," Proc. 6th International Conf. on Quantum Communication, Measurement, and Computing, eds. J. H. Shapiro and O. Hirota, pp. 405-410 (Princeton, NJ, Rinton Press, 2003). Full version Quantum Information and Computation 3, No. 6, 581-602 (2003). Also quant-ph/0210062.
Quantum states cannot be cloned. This paper shows how to give a similar property to classical messages. The goal of uncloneable encryption is to send classical messages in such a way that an eavesdropper not only cannot read them, but cannot even copy them down to decode later. Remarkably, it turns out that authentication schemes for quantum states have precisely the right properties to be uncloneable encryption schemes when used for classical messages. This establishes a connection between cryptography for quantum states and for classical data, suggesting the two subjects may be more closely related than previously thought.
Essentially all the papers listed in the Quantum Error Correction and Fault Tolerance sections, plus many of the papers listed under Quantum Cryptography, use stabilizer codes or states. However, the following papers particularly develop the formalism:
D. Gottesman, "The Heisenberg Representation of Quantum Computers," short version Group22: Proceedings of the XXII International Colloquium on Group Theoretical Methods in Physics, eds. S. P. Corney, R. Delbourgo, and P. D. Jarvis, pp. 32-43 (Cambridge, MA, International Press, 1999). Long version quant-ph/9807006.
I recommend this paper as a starting point to my work if you're not up to my thesis. It gives an introduction to stabilizers in a general context, and hopefully should be fairly easy to read. It also explains how to use stabilizers to solve all sorts of communications problems beyond just quantum error correction. The long version includes additional examples and a discussion of how to handle measurements and operations conditioned on the results.
S. Aaronson, D. Gottesman, "Improved Simulation of Stabilizer Circuits," Phys. Rev. A 70, 052328 (2004) (14 pages). Also quant-ph/0406196.
This paper contains a collection of results relating to the efficient classical simulation of Clifford group circuits (as described, for instance, in my "Heisenberg representation" paper). Probably the most interesting is an improvement in the speed of the simulation from O(n^3) to O(n^2), which has a big impact on practical implementations of the simulation. Indeed, my co-author Scott has written a program performing such simulations, and some people have even used it to do research.
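The underlying representation can be sketched in a few lines of Python (my illustration, not the paper's CHP code, and with sign bits omitted). Each stabilizer generator is a row of x-bits and z-bits; a Clifford gate only touches a couple of columns, and the paper's speedup comes from also tracking "destabilizer" rows (not shown here) so that simulating a measurement costs O(n^2) rather than O(n^3).

```python
import numpy as np

# Minimal binary tableau for stabilizer simulation (signs omitted).
n = 3
x = np.zeros((n, n), dtype=int)   # x[i, j] = 1: generator i has X on qubit j
z = np.eye(n, dtype=int)          # initial state |000>: generators Z, Z, Z

def hadamard(q):
    """H swaps the roles of X and Z on qubit q."""
    x[:, q], z[:, q] = z[:, q].copy(), x[:, q].copy()

def cnot(c, t):
    """CNOT propagates X from control to target and Z from target to control."""
    x[:, t] ^= x[:, c]
    z[:, c] ^= z[:, t]

# Build a GHZ state: the generators become XXX, ZZI, ZIZ (up to signs).
hadamard(0)
cnot(0, 1)
cnot(0, 2)
assert (x[0] == 1).all() and (z[0] == 0).all()               # XXX
assert (x[1] == 0).all() and (z[1] == [1, 1, 0]).all()       # ZZI
assert (x[2] == 0).all() and (z[2] == [1, 0, 1]).all()       # ZIZ
```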
S. Bravyi, D. Fattal, and D. Gottesman, "GHZ extraction yield for multipartite stabilizer states," J. Math. Phys. 47, 062106 (2006) (19 pages), DOI: 10.1063/1.2203431. Also quant-ph/0504208.
One application of stabilizers beyond error correction is to study entangled states. The class of stabilizer states contains many interesting entangled states, but their structure is simple enough to be potentially tractable, whereas the study of more general entangled states has so far defied definitive analysis. In this paper, we show that for three-party pure entangled stabilizer states, there are only EPR pairs and the GHZ state, and we give some general tools for studying stabilizer entanglement structure. The structure of stabilizer states split among four or more people is going to be more complicated, but there is at least some reasonable hope of being able to understand it.
C. Cormick, E. F. Galvao, D. Gottesman, J. P. Paz, A. O. Pittenger, "Classicality in discrete Wigner functions," Phys. Rev. A 73, 012301 (2006) (9 pages). Also quant-ph/0506222.
In quantum mechanics, there is no way in general to write down a probability for simultaneously having a particular value for two non-commuting observables, like position and momentum. One of the closest tries is the Wigner function, which has many of the features of a probability distribution, except that it can be negative. Not all states have negative values in the Wigner function, however, so we can think of the ones that do not as "classical" states. The usual Wigner function is for continuous variable systems, but there is also a definition of Wigner function for discrete systems, like a group of qubits. In this paper, we study the classical states for the discrete Wigner function (actually for a set of related definitions of the discrete Wigner function, as there is some arbitrariness in the definition), and show that they have a simple structure. Indeed, they are classical in another sense as well: transitions between them can be simulated efficiently on a classical computer.
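For a single qubit, one standard version of the discrete Wigner function (Wootters' construction; my illustration, not code from the paper) can be computed directly, and already shows the classical/non-classical distinction:

```python
import numpy as np

# Single-qubit discrete Wigner function: W(q,p) = Tr[rho A(q,p)] / 2,
# where the phase-point operators A(q,p) are built from the Paulis.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def A(q, p):
    return 0.5 * (I + (-1)**q * Z + (-1)**p * X + (-1)**(q + p) * Y)

def wigner(rho):
    return np.array([[np.trace(rho @ A(q, p)).real / 2
                      for p in (0, 1)] for q in (0, 1)])

# A stabilizer state such as |0><0| is "classical": no negative entries.
W0 = wigner(np.diag([1.0, 0.0]))
assert W0.min() >= -1e-12 and np.isclose(W0.sum(), 1)

# A magic state (Bloch vector -(1,1,1)/sqrt(3)) has a negative entry.
rho_magic = 0.5 * (I - (X + Y + Z) / np.sqrt(3))
assert wigner(rho_magic).min() < 0
```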
D. Aharonov, D. Gottesman, S. Irani, J. Kempe, "The power of quantum systems on a line," Proc. 48th IEEE Symposium on the Foundations of Computer Science (FOCS), 373-383 (2007), DOI: 10.1109/FOCS.2007.4389508. Full version Comm. Math. Physics 287, No. 1, 41-65 (2009), DOI: 10.1007/s00220-008-0710-3. Also arXiv:0705.4077 [quant-ph].
In this paper we study the computational complexity of finding the ground state energy of a one-dimensional quantum system. Previous papers had shown that it can be very difficult to find the ground state energy (the lowest possible energy of the system) for systems in two or more dimensions -- specifically, finding the ground state energy is QMA-complete, meaning the problem is at least as difficult as every other problem in the complexity class QMA. QMA is the quantum analogue of NP, and, roughly speaking, is the set of problems where it might be difficult to find the answer, which is a quantum state (the ground state in this case), but given the answer, it is easy to check that it is correct (in this case, that the ground state energy is below some specified value). QMA-complete problems are probably even harder than NP-complete problems, which means that a physical system with one of these difficult Hamiltonians would be unable to relax to its ground state in any reasonable amount of time. A similar construction shows that two-dimensional systems can be used for universal quantum computation via adiabatic evolution, meaning the system is slowly switched between an energy spectrum with a very simple easy-to-create ground state to one in which the ground state gives the answer to some computational problem. However, one-dimensional systems are much more highly constrained, so a lot of people thought they would be much easier. Contrary to this expectation, we showed that one-dimensional systems can still be QMA-complete, and can be used for universal adiabatic quantum computation.
R. Cleve, D. Gottesman, M. Mosca, R.D. Somma, D.L. Yonge-Mallo, "Efficient discrete-time simulations of continuous-time quantum query algorithms," Proc. 41st Ann. Symp. on Theory of Computing, 409-416 (2009), DOI: 10.1145/1536414.1536471. Also arXiv:0811.4428 [quant-ph].
In the field of quantum computation, we frequently think of time as occurring in discrete time steps, and quantum circuits as a series of distinct gates. However, in reality, time is continuous, or at any rate can occur in very tiny intervals. In the quantum circuit model, this corresponds to the existence of very small rotations, but in the standard cost model, a small rotation and a large rotation cost us the same amount of effort. In this paper, we study how to make small rotations cost the "correct" amount, proportional to the small amount of time needed to perform them. We don't achieve it in general, but do manage to get the cost correct (up to a logarithmic factor) for a constant phase rotation, even if the rotation is accompanied by a more-or-less arbitrary additional controllable Hamiltonian. In particular, we can show that an algorithm built around continuous-time queries to an oracle cannot be substantially more efficient than one using only the discrete version of that oracle.
D. Gottesman, S. Irani, "The Quantum and Classical Complexity of Translationally Invariant Tiling and Hamiltonian Problems," Proc. 50th Annual Symp. on Foundations of Computer Science, 95-104 (2009), DOI: 10.1109/FOCS.2009.22. Full version Theory of Computing 9, article 2, 31-116 (2013), DOI: 10.4086/toc.2013.v009a002. Also arXiv:0905.2419 [quant-ph].
This paper is to some extent a follow-up of our previous paper on QMA-completeness of one-dimensional quantum spin chains. The improvement is that we consider Hamiltonians which are translationally-invariant, meaning the energy term coupling a pair of adjacent particles does not depend on where the particles are. This means that the only input is N, the number of particles in the chain, which makes it hard to prove a complexity-theoretic hardness result: Normally one shows a problem is hard by encoding a class of other problems as instances of the problem you are studying, but in this case, there do not seem to be enough free parameters to encode many different problems. We solve this problem by having the input N provided in binary (so the input is only log N bits long), and then encoding very hard problems to make up for that. The upshot is that we prove that if there is a quantum algorithm for the translationally-invariant one-dimensional Hamiltonian problem which runs in a time polynomial in N, then QMAEXP = BQEXP. This is similar to the statement we can make in the non-translationally-invariant case, but slightly weaker. In other words, probably this problem is too hard to solve efficiently on a quantum computer; it is also probably too hard to even check efficiently on a classical computer. As far as we can tell, the analogous classical result in 2D (showing that 2D tiling with a fixed set of tiles is NEXP-complete) is also new.
S. L. Braunstein, C. A. Fuchs, D. Gottesman, and H.-K. Lo, "A Quantum Analog of Huffman Coding," IEEE Trans. Info. Theory 46, 1644-1649 (2000). Also quant-ph/9805080.
This paper discusses quantum data compression where the codewords have different sizes. There are some interesting complications relating to the difficulty of dealing with a superposition of states of different numbers of qubits. Also, the encoding algorithm provides a significant speedup over the standard block compression algorithm.
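For intuition about the variable-length codes in the title, here is the standard classical Huffman construction (the classical algorithm the paper's quantum scheme is analogous to, not the quantum scheme itself): repeatedly merge the two least-probable subtrees, so frequent symbols end up with short codewords.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a classical Huffman code from {symbol: probability}.
    Returns {symbol: binary codeword string}."""
    tiebreak = count()  # keeps heap comparisons well-defined for equal freqs
    heap = [(f, next(tiebreak), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

code = huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
print(code)  # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

For these dyadic probabilities the codeword lengths (1, 2, 3, 3) exactly match the Shannon entropy. The quantum complication the paper deals with is that a superposition of inputs becomes a superposition of codewords of *different lengths*, which has no classical counterpart.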
R. Blume-Kohout, S. Croke, D. Gottesman, "Streaming universal distortion-free entanglement concentration," IEEE Trans. Info. Theory 60, No. 1, pp. 1-17 (Jan. 2014), DOI: 10.1109/TIT.2013.2292135. Also arXiv:0910.5952 [quant-ph].
Entanglement concentration is a procedure whereby Alice and Bob take a number of partially entangled pure states shared between them and concentrate the entanglement into a smaller number of maximally entangled pairs. In this paper, we develop a better protocol for entanglement concentration which both achieves the optimal asymptotic rate to produce maximally entangled pairs and also outputs the pairs as quickly as possible. A referee accurately described the result as "cute."
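The optimal asymptotic rate mentioned here is the entropy of entanglement: from n copies of a partially entangled pure state, Alice and Bob can extract roughly n·H(p) maximally entangled pairs, where p is the distribution of squared Schmidt coefficients and H is the Shannon entropy. A quick numerical check of the rate (this computes the rate only, not the streaming protocol of the paper):

```python
import numpy as np

def entanglement_rate(psi, dims):
    """Entropy of entanglement of a bipartite pure state |psi>,
    i.e. the asymptotic concentration rate in ebits per copy."""
    dA, dB = dims
    # Schmidt coefficients are the singular values of the state
    # reshaped into a dA x dB matrix.
    s = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
    p = s**2
    p = p[p > 1e-15]  # drop zero Schmidt coefficients
    return -np.sum(p * np.log2(p))

# Partially entangled pair: cos(t)|00> + sin(t)|11>
t = np.pi / 8
psi = np.zeros(4)
psi[0], psi[3] = np.cos(t), np.sin(t)
print(entanglement_rate(psi, (2, 2)))  # about 0.60 ebits per copy

# A maximally entangled pair yields exactly 1 ebit per copy
bell = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)
print(entanglement_rate(bell, (2, 2)))  # 1.0
```

The hard part of the paper is not this rate, which was known, but producing the output pairs in a streaming fashion without waiting for all n input copies.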
D. Beckman, D. Gottesman, M. A. Nielsen, and J. Preskill, "Causal and localizable quantum operations," Phys. Rev. A 64, 052309 (2001) (21 pages). Also quant-ph/0102043.
This paper investigates operations which are causal -- that is, do not allow signalling faster than light. It turns out that quantum systems cannot implement the most general of such operations. Just as quantum mechanics allows correlations stronger than those possible with classical mechanics (as shown by Bell's inequality), there are possible theories that create even stronger correlations without violating special relativity.
D. Gottesman, "Maximally Sensitive Sets of States", arXiv:1907.05950 [quant-ph].
Coherent errors in a quantum system can take advantage of quantum interference to potentially accumulate much faster than errors which behave in a more classical way. This makes coherent errors very dangerous in some situations, and makes it valuable to know if your system has coherent errors so that you can counteract them. In this paper, I give a simple mathematical criterion and describe a simple experimental technique to determine if coherent errors are present in a system and to give full or partial information about their nature.
All the papers I have written related to this topic also fall in other categories listed above. Most of my papers are about abstract theory, and relate to many physical systems. However, the following papers relate to particular potential physical realizations of quantum information processing systems, or are otherwise particularly relevant to implementations:
Some of my work on quantum information has significant applications or relationship to research in other areas. The following papers are particularly notable in this respect:
D. Gottesman, J. Mervis, M. Prentiss, and N.P. Bigelow, "Calculation of enhanced slowing and cooling due to the addition of a traveling wave to an intense optical standing wave," Phys. Rev. A 46, 356-363 (1992).
This is a paper on Sisyphus cooling based on work I did one summer as an undergraduate. The title pretty much summarizes the paper.
D. Gottesman, "Traversable wormholes and black hole complementarity," Phys. Rev. D 51, 4600-4602 (1995). Also hep-th/9404099.
A short paper on the information loss problem. Black hole complementarity is a proposed solution. This paper shows that the solution is incompatible with the existence of traversable wormholes. Of course, either or both could be non-existent.
D. Gottesman, J. Preskill, "Comment on 'The Black Hole Final State,'" JHEP 0403 (2004) 026. Also hep-th/0311269.
This is a comment on a paper by Horowitz and Maldacena which proposes a new idea to resolve the black hole information loss problem. The Horowitz-Maldacena idea is that at the black hole singularity, the infalling radiation is forced into a particular state through some sort of unspecified quantum gravity effect. This final state condition by itself is non-unitary (i.e., not reversible), but for the correct choice of final state, the overall process is reversible: information about the matter that forms the black hole can, in principle, be reconstructed from the Hawking radiation left after it evaporates. Our observation in the comment is that this is only true for very special choices of final state, and that any interactions inside the black hole before the matter hits the singularity will destroy this effect, returning us in general to a situation where information is lost.
D. Beckman, D. Gottesman, A. Kitaev, J. Preskill, "Measurability of Wilson loop operators," Phys. Rev. D 65, 065022 (2002) (16 pages). Also hep-th/0110205.
This paper shows that in a non-Abelian gauge theory, non-destructive measurement of Wilson loops is impossible. Since Wilson loops are often used to define the observables of such a theory, this presents a puzzle if they cannot actually be observed themselves.
M. Fitzi, D. Gottesman, M. Hirt, T. Holenstein, A. Smith, "Detectable Byzantine Agreement Secure Against Faulty Majorities," Proc. 21st ACM Symposium on Principles of Distributed Computing, 118-126 (2002), DOI: 10.1145/571825.571841.
The Byzantine Agreement problem is how to make general announcements based only on 2-person communication connections. This is difficult when some of the players are dishonest, because if two people disagree on the announcement, it could be because one of them is lying, or because they were actually sent different information. There are standard ways to do this using digital signatures, but how do you get everyone to agree originally what signatures they will accept as correct? This paper shows that the setup phase can be performed successfully, no matter how many of the participants are cheating. The catch is that when there are many cheaters, they can always cause the setup phase to fail; however, the honest players will know it failed, and will therefore only trust the signature infrastructure when it will actually work correctly.
D. Gottesman, "Quantum Statistics with Classical Particles," in Quantum Communication, Measurement and Computing, Proc. 8th International Conference on Quantum Communication, Measurement, and Computing, eds. O. Hirota, J. H. Shapiro, and M. Sasaki, 295-298 (NICT Press, 2007). Full version cond-mat/0511207.
Indistinguishability is conventionally associated closely with quantum mechanics to the extent that truly indistinguishable particles are said to obey "quantum statistics." They can then exhibit unusual collective behaviors like Bose-Einstein condensation, in which large numbers of particles will accumulate in the system's lowest-energy state, even at non-zero temperature. However, it turns out, as I argue in this paper, that the association with quantum mechanics is unnecessary: It is perfectly possible to write down otherwise classical theories that behave this way too, and even to demonstrate them in the laboratory. This contradicts widespread folk wisdom among physicists, but the practical impact is probably limited to how we think about and teach these properties.
D. Gottesman, M. B. Hastings, "Entanglement vs. gap for one-dimensional spin systems," New Journal of Physics 12, 025002 (2010), DOI: 10.1088/1367-2630/12/2/025002. Also arXiv:0901.1108 [quant-ph].
An area law states that the entanglement of a region scales only with the surface area of the region and not the volume. It is believed that the ground state of a gapped local spin system in any dimension always satisfies an area law, but it has only been proven for one dimension, where it says that the entanglement of an interval is a constant, independent of the size of the interval. Intuitively, if a ground state satisfies an area law, it means that entanglement only stretches some finite distance, so all the entanglement for a region with the outside comes only from the vicinity of the boundary. The constant of proportionality between the entanglement and the surface area therefore tells us the range of entanglement. We study the constant for one-dimensional systems, and find that it can be polynomial in the spectral gap (the difference in energy between the ground state and the second-lowest energy level). This contrasts with the systems previously studied, which only had a logarithmic dependence of entanglement with the spectral gap.
D. Gottesman, "Spin systems and computational complexity," Physics in Canada 66, No. 2, 87-89 (2010). Also arXiv:0911.5596 [quant-ph].
This is a short (3 page) introductory paper aimed at undergraduates. It discusses the relationship between spin glasses and NP-complete problems.
D. Gottesman, T. Jennewein, S. Croke, "Longer-Baseline Telescopes Using Quantum Repeaters," Phys. Rev. Lett. 109, 070503 (2012) [5 pages], DOI: 10.1103/PhysRevLett.109.070503. Also arXiv:1107.2939 [quant-ph].
One goal in designing telescopes is to optimize resolution, allowing the telescope to see structures with a small apparent size. The resolution of a telescope with a single large mirror is limited by the size of the mirror. One approach to improving resolution beyond that is to do interference measurements between light arriving at two or more separate telescope dishes. Resolution is then limited by the distance between the telescopes rather than the size of the individual mirrors (although the sensitivity to faint objects is still limited by size, since that determines how much light they collect). For optical wavelengths, it is difficult to do such an interference measurement, since it requires moving the photons captured by the telescopes over large distances. We suggest a way to solve this problem using the technology of quantum repeaters. Quantum repeaters are still in development, but their purpose is to move quantum information over long distances, so they are well-suited to this task.
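The gain is easy to quantify with the standard Rayleigh diffraction limit: angular resolution ≈ 1.22 λ/D for a single mirror of diameter D, versus a fringe-resolution scale of ≈ λ/B for an interferometer with baseline B. The numbers below are illustrative choices, not figures from the paper:

```python
# Angular resolution: single mirror vs. interferometric baseline
wavelength = 500e-9   # 500 nm, visible light
mirror = 10.0         # 10 m mirror diameter (illustrative)
baseline = 100e3      # 100 km baseline via quantum repeaters (illustrative)

theta_mirror = 1.22 * wavelength / mirror  # Rayleigh criterion
theta_baseline = wavelength / baseline     # interferometer fringe scale

print(f"single mirror:  {theta_mirror:.2e} rad")
print(f"interferometer: {theta_baseline:.2e} rad")
print(f"improvement:    ~{theta_mirror / theta_baseline:.0f}x")
```

Even a modest 100 km baseline improves on a 10 m mirror by four orders of magnitude in angular resolution, which is why extending the baseline is worth the trouble of moving quantum states of light between the sites.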
Feb. 16, 2024