After a Three-Decade Odyssey, Noted Quantum Error Correction Theorist Returns to Maryland


Daniel Gottesman, one of the newest Fellows of the Joint Center for Quantum Information and Computer Science (QuICS) and the Brin Family Endowed Professor in Theoretical Computer Science at the University of Maryland, is no stranger to the region. He grew up in Montgomery County, just a short drive from the Maryland campus, and both his parents worked as biologists at the National Institutes of Health in Bethesda.

Now, Gottesman is returning to his native Maryland after a career that has taken him across North America and through different areas of quantum information science. In his wake, he has left significant contributions to quantum error correction, quantum cryptography and fault-tolerant quantum computation, including a theorem that bears his name.

“Gottesman is ubiquitous,” says QuICS co-director Yi-Kai Liu. “Anyone who studies quantum error correction has probably read his notes on stabilizer codes. Anyone who studies quantum algorithms has heard of the Gottesman-Knill theorem. And anyone who works on quantum networks has heard of his proposal for interferometric telescopes.”

Gottesman’s career was a long time in the making. By the time he was in third grade, Gottesman knew he wanted to be a scientist. “Both my parents are scientists,” he says, “so it was just kind of like a normal thing to do for me.” A few early encounters with the messy reality of experimental science (including a stint in his mother’s biology lab for a science fair project) set Gottesman on the path toward theory. What most ignited his interest was the desire to have an intellectual perch where he could oversee a range of unsolved problems—and then plan a path to solve them.

Motivated by this pursuit, he left Maryland for the harsher winters of Boston, where he pursued an undergraduate degree in physics at Harvard University. He still made frequent trips back to Maryland, even spending one summer working on a computational project with Sankar Das Sarma, a professor of physics and condensed matter theorist at UMD. But he found simulation almost as unsatisfying as experiment—the best machine he had access to in 1989 was too slow to definitively resolve the problem he was tackling.

Gottesman continued his academic and geographical journey, landing on the West Coast as a graduate student at Caltech in 1992. There, he joined the group of John Preskill, a theoretical physicist, intending to set up camp on the scholarly island of quantum gravity—an ambitious quest to resolve some inconsistencies between the theory of gravity and quantum physics. He was excited by the prospect of uncovering something fundamental about the world, but soon became discouraged by a lack of progress. “It just seemed like, in the end, whatever I did, it wouldn't definitively solve the problem,” Gottesman says.

As luck would have it, Caltech was one of the most exciting places to be during the renaissance of quantum information in the 1990s. Gottesman took his chance to go on a different physics adventure while staying in the same physical location.

The idea of quantum computing, first proposed by Paul Benioff in the late 1970s and further developed by Richard Feynman in 1982, is to encode information not in normal bits—basic units of classical information that can be either a 1 or a 0—but in quantum mechanical states, or qubits, which can be 1, 0 or a superposition of the two. Feynman conceived of this as a way to simulate the world in its full quantum complexity, since it was clear to him even in the early 1980s that regular computers were not, and never would be, up to the task. For more than a decade, this idea remained an esoteric curiosity.

This changed abruptly in 1994, when Peter Shor discovered that a quantum computer could be used not only to simulate quantum mechanics but also to solve a problem from traditional computer science: breaking apart large numbers into their prime factors. And it wouldn’t just solve this problem—a quantum computer could factor large numbers much faster than any traditional computer. The factoring problem is particularly consequential because its difficulty underpins many of the cryptographic algorithms used to protect data on the internet, both then and now. Shor’s finding showed that quantum computers could have advantages beyond the academic study of quantum mechanics, sparking the interest of many researchers, including Gottesman.

“By 1996, things really accelerated,” he says. “It seemed like major findings that today we think of as important results were coming out almost every week. It was really an exciting time.” Gottesman had found an academic perch with vistas over a wealth of important questions, and their answers felt within reach.

At the time, one fundamental issue puzzled researchers. The qubits in quantum computers are very susceptible to disturbances from their surroundings, and experimentalists balked at the idea of preserving them long enough to do anything interesting. It was up to theorists to figure out how imperfect qubits could still be used to faithfully store and transmit quantum information.

They had some leads. In classical information theory, one way to protect against errors is to be repetitive—instead of sending a message with a single bit (0, say), send three identical bits (000) so that if one of them gets flipped (100, 010 or 001) the majority vote still signals the correct message.
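To make that concrete, here is a minimal sketch of the three-bit repetition scheme in Python (the function names and the ten percent flip probability are illustrative assumptions, not details from the article):

```python
# A minimal sketch of the classical three-bit repetition code.
import random

def encode(bit):
    """Repeat the message bit three times."""
    return [bit, bit, bit]

def noisy_channel(codeword, flip_prob=0.1):
    """Flip each bit independently with probability flip_prob (assumed noise)."""
    return [b ^ 1 if random.random() < flip_prob else b for b in codeword]

def decode(codeword):
    """Majority vote recovers the message if at most one bit flipped."""
    return 1 if sum(codeword) >= 2 else 0

message = 0
received = noisy_channel(encode(message))
print(received, "->", decode(received))  # usually recovers 0
```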

But this approach is trickier with quantum information. For one, qubits fall prey to more complex errors. Like a classical bit, a qubit can get flipped, but it is also subject to another kind of error, in which the relative phase of a superposition shifts. Second, making identical copies of qubits without knowing exactly what they are is fundamentally impossible, limiting the situations where repetition can be used. Again, Shor offered the key insight. He showed how to encode one “logical qubit” into several physical qubits and diagnose when errors occurred on any single qubit—all without directly measuring, and thereby destroying, the encoded information. This launched the field of quantum error correction, and Gottesman got in on the ground floor.
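A simpler cousin of Shor’s nine-qubit code, the three-qubit bit-flip code, shows the trick in miniature. The statevector demo below (an illustrative sketch, not code from Gottesman’s or Shor’s work) encodes a superposition across three qubits, injects a bit flip, and locates it by measuring two parity checks, without ever learning the encoded amplitudes a and b:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron3(a, b, c):
    """Tensor product of three single-qubit operators."""
    return np.kron(np.kron(a, b), c)

# Encode an arbitrary superposition a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8
state = np.zeros(8)
state[0b000], state[0b111] = a, b

# An unknown bit flip strikes the middle qubit (chosen here for illustration).
state = kron3(I, X, I) @ state

# Measure the code's two parity checks, Z1*Z2 and Z2*Z3. Each yields +1 or -1;
# the pair of outcomes pinpoints the flipped qubit while revealing nothing
# about the amplitudes a and b.
for name, check in [("Z1*Z2", kron3(Z, Z, I)), ("Z2*Z3", kron3(I, Z, Z))]:
    print(name, "=", int(round(state @ check @ state)))
# Prints -1, -1: only a flip on the middle qubit trips both checks,
# so applying X there again repairs the state.
```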

Gottesman found a whole family of such encodings that are particularly handy (independently developed by a team of researchers at AT&T Research and the Institute for Defense Analyses around the same time). These so-called stabilizer codes come with an out-of-the-box prescription for determining what kind of error has occurred. They also make it easy to read off how many logical qubits are encoded, and how many simultaneous errors it would take to corrupt an encoded 0 into a 1 beyond repair.

Later, Gottesman proved that stabilizer codes are extra useful because the operations used to encode and decode them can be efficiently simulated on a classical computer—meaning that working with these logical qubits is both theoretically and experimentally straightforward, at least relative to more complex codes. Emanuel Knill, then a staff member at Los Alamos National Laboratory (LANL) in New Mexico, had independently noted part of this finding in a footnote of his paper, and so this result on efficiently simulating stabilizer operations became known as the Gottesman-Knill theorem.
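In practice, that simulation amounts to tracking how a short list of stabilizer generators transforms under each gate, rather than evolving all 2^n amplitudes. The toy simulator below is a minimal sketch using the standard binary-tableau update rules (the class name and example circuit are illustrative assumptions, not Gottesman’s own code):

```python
import numpy as np

class StabilizerSim:
    """Track stabilizer generators as binary (x|z) vectors plus a sign bit."""
    def __init__(self, n):
        self.n = n
        self.x = np.zeros((n, n), dtype=np.uint8)  # X part of each generator
        self.z = np.eye(n, dtype=np.uint8)         # |0...0> is stabilized by Z_i
        self.r = np.zeros(n, dtype=np.uint8)       # sign bit (1 means minus)

    def h(self, a):
        """Hadamard swaps the X and Z parts on qubit a (Y picks up a sign)."""
        self.r ^= self.x[:, a] & self.z[:, a]
        self.x[:, a], self.z[:, a] = self.z[:, a].copy(), self.x[:, a].copy()

    def cnot(self, a, b):
        """CNOT with control a, target b: X propagates forward, Z backward."""
        self.r ^= self.x[:, a] & self.z[:, b] & (self.x[:, b] ^ self.z[:, a] ^ 1)
        self.x[:, b] ^= self.x[:, a]
        self.z[:, a] ^= self.z[:, b]

    def __str__(self):
        paulis = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
        rows = []
        for i in range(self.n):
            s = "".join(paulis[(self.x[i, j], self.z[i, j])] for j in range(self.n))
            rows.append(("-" if self.r[i] else "+") + s)
        return "\n".join(rows)

# Encoding circuit for a three-qubit code: a Hadamard followed by two CNOTs.
sim = StabilizerSim(3)
sim.h(0)
sim.cnot(0, 1)
sim.cnot(0, 2)
print(sim)  # +XXX, +ZZI, +ZIZ: roughly n^2 bits of bookkeeping, not 2^n amplitudes
```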

These days, experimentalists are getting tantalizingly close to encoding a logical qubit in a real quantum computer, and Gottesman’s stabilizer codes are one of their go-to strategies. “It's just so much easier to work with,” says Gottesman. “And it enabled a kind of explosion of new error correcting codes and techniques.”

After finishing his Ph.D., Gottesman moved to the Southwest, joining Knill at LANL as a postdoctoral researcher. While there, he continued exploring the quantum error correction landscape and made some forays into quantum cryptography—the study of exploiting quantum mechanical properties to protect sensitive data. From there, he went on to another postdoctoral position in the theory group at Microsoft Research in Redmond, Washington, followed by a Long-Term Clay Mathematics Institute (CMI) Prize Fellowship at the University of California, Berkeley. In 2002 he joined the faculty at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, a newly formed research center that has since become one of the premier places for theoretical physics in the world.

During his tenure at Perimeter, Gottesman watched as enthusiasm for quantum error correction waned within the community. “There was this kind of dark ages of error correction where almost nobody was working on it,” Gottesman says. Although he branched out into other fields as well, he continued exploring the field of error correction, even if it was no longer fashionable. “Partially, I was trying to keep the flame alive until we could get to the point where it was necessary. I saw things that seemed like holes in the field, and so I tried to fill them.”

One of the gaps Gottesman has worked to bridge is figuring out which encodings help not only with quantum error correction (protecting quantum information stored in memory or sent over a channel) but also with quantum fault tolerance (protecting quantum information while it’s being used in a computation). For a quantum computer to work efficiently and correctly, both need to be in place.

A particular sub-family of stabilizer codes, known as surface codes, had emerged as the go-to solution that could do both. But although these surface codes are easy to work with, they require a large number of physical qubits in order to encode one logical qubit. And, as the length of a computation increases, surface codes require more and more physical qubits to tolerate and recover from errors that creep in.

Gottesman started looking at a different sub-family of stabilizer codes, called low-density parity check (LDPC) codes. He proved that, unlike surface codes, LDPC codes do not require more qubits for each logical qubit as the computation gets longer. LDPC codes do take more resources up front than surface codes (and current experimental realizations are far from being able to benefit from them), but if quantum computers get significantly bigger and are able to work for longer stretches of time, these LDPC codes could become the new workhorse.
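The name describes the structure of the code’s checks: each parity check involves only a few qubits, and each qubit participates in only a few checks. A classical analogue with a toy check matrix (chosen here purely for illustration) makes the idea concrete:

```python
import numpy as np

# Each row is one parity check. Note the "low density": every row and
# every column contains only a few 1s.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
], dtype=np.uint8)

codeword = np.zeros(6, dtype=np.uint8)  # all-zeros passes every check
corrupted = codeword.copy()
corrupted[4] ^= 1                       # flip bit 4

syndrome = H @ corrupted % 2            # violated checks flag the error
print(syndrome)                         # [0 1 1]: checks 1 and 2 both touch bit 4
```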

Starting in 2010, as quantum computers grew beyond the early implementations, Gottesman noticed another gap in the field. It became clear to him that experimentalists needed a practical way to determine whether fault tolerance protocols were helping. “It’s an important question to ask, how would you actually show that your quantum computer has fault tolerance,” he says, “and this is a surprisingly difficult question.”

From a theorist’s point of view, turning the noise up and seeing a fault-tolerant algorithm compensate for it would be smoking-gun evidence that it’s working. However, Gottesman realized that experimentalists don’t necessarily have their fingers on the noise knob. And turning the noise down is hardly an option—current realizations have only recently achieved noise levels low enough to benefit from fault-tolerant protocols.

Gottesman’s solution was quite intuitive. First, run your quantum computer without any fault-tolerant error correction. Then run it again with fault tolerance switched on, and check whether the results measurably improve. His prescription can be applied to any of the quantum computers being built today.
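A toy Monte Carlo version of that with-versus-without comparison, using a classical repetition code under an assumed independent bit-flip noise model (an illustration of the spirit of the test, not Gottesman’s actual protocol), looks like this:

```python
import random

def bare_failure(p, trials=100_000):
    """Failure rate of a single unprotected bit under flip probability p."""
    return sum(random.random() < p for _ in range(trials)) / trials

def encoded_failure(p, trials=100_000):
    """Failure rate of three-bit repetition with majority-vote correction."""
    fails = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        fails += flips >= 2  # two or more flips defeat the majority vote
    return fails / trials

for p in (0.01, 0.1, 0.4):
    print(f"p={p}: bare={bare_failure(p):.4f}  encoded={encoded_failure(p):.4f}")
# For small p the encoding clearly wins; as p approaches 0.5 it stops helping.
```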

With larger quantum computers now on the horizon, Gottesman has also developed connections with the world of industry, serving as a Senior Scientist at Quantum Benchmark Inc.—a company developing software to evaluate and optimize the performance of quantum computers. He plans to maintain this connection and keep his finger on the pulse of industry developments as he transitions to QuICS.

Having returned to Maryland, Gottesman says he plans to continue making progress on quantum error correction and fault tolerance, but he also wants to return to an idea he had a decade ago about using quantum information principles to improve telescope performance. However, he doesn’t want to constrain his future research direction too much.

“The reality is that probably a lot of my most important work has been serendipity driven,” Gottesman says. “And so I don’t feel like I can really predict what I’m going to do, because, you know, I don’t want to shut off this serendipity.”

 

—Story by Dina Genkina

 
