QuICS Researchers Develop Novel Formal Verification Tools for Quantum Algorithms

Rapid developments in quantum computing hardware have brought forth a fundamental question from scientists and others active in the quantum revolution: “How can we be certain that a quantum computer program—when properly executed—will give the right answer?”

In classical (non-quantum) computing, well-established methods exist for this task: systematic testing catches “bugs”—many of which are human-generated—and other anomalies, while formal verification goes further, mathematically proving that a program meets its specification.

But quantum-based algorithms are much harder to test. The debugging techniques familiar to most programmers from the classical side do not transfer, at scale, to the quantum domain: quantum states cannot be inspected mid-execution without disturbing them, and simulating them classically becomes intractable as systems grow.

To address this quandary, researchers from the Joint Center for Quantum Information and Computer Science (QuICS) joined other programming experts to explore novel formal verification methods that work in the quantum world.

A paper highlighting their work, “A formally certified end-to-end implementation of Shor’s factorization algorithm,” was published earlier this year in the Proceedings of the National Academy of Sciences.

The QuICS team comprises Xiaodi Wu, an associate professor of computer science and QuICS Fellow, and Yuxiang Peng, a fifth-year computer science doctoral student advised by Wu.

They collaborated on the paper with Kesha Hietala, Runzhou Tao, Liyi Li, Robert Rand and Michael Hicks. (Li, Rand and Hicks were previously active in QuICS; Li and Rand were Basili Postdoctoral Fellows and Hicks was a QuICS Fellow.)

Peng, who is lead author of the paper, says that adapting formal verification methods developed for classical programs so that they function in the quantum realm is conceptually straightforward, but pragmatically challenging.

He explains that classical program states are, in their simplest terms, “roadmaps” from addresses to 0’s and 1’s—the bits that are central to traditional computing. But quantum states are much richer: they assign a complex amplitude to every possible configuration of bits, so their symbolic descriptions balloon in size and quickly outgrow what classical proof techniques can handle.
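
To see the gap concretely, consider a minimal sketch in Python (our illustration, not the paper’s code): a classical state over n bits needs only n entries, while the corresponding quantum state needs 2^n complex amplitudes.

```python
# Illustrative sketch (our example, not the paper's): a classical state
# over n bits is just n address-to-bit entries, while the quantum state
# over n qubits carries 2**n complex amplitudes.
import numpy as np

n = 3

# Classical state: a "roadmap" from addresses to 0s and 1s -- n entries.
classical_state = {addr: 0 for addr in range(n)}

# Quantum state: one complex amplitude per n-bit string -- 2**n entries.
# Here, the uniform superposition over all 3-bit strings.
quantum_state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

print(len(classical_state))  # 3
print(len(quantum_state))    # 8, and doubling with every added qubit
```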

Given the potentially large size of quantum states, the researchers needed to develop innovative automated tactics to translate symbolic states into normalized algebraic forms, making them more amenable to automated simplification. They also chose to implement symbolic matrix-based representations for the algebraic expressions.
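
As a loose analogy—the team’s machinery lives inside a proof assistant, not Python—a symbolic algebra library such as SymPy shows what normalization buys: a stacked matrix expression collapses back to a simple canonical form that automation can check.

```python
# Loose analogy in SymPy (the paper's tooling lives inside a proof
# assistant, not Python): pushing a symbolic quantum expression into a
# normal form makes automated simplification tractable.
import sympy as sp

a, b = sp.symbols('a b')

# Hadamard gate and an arbitrary symbolic single-qubit state.
H = sp.Matrix([[1, 1], [1, -1]]) / sp.sqrt(2)
psi = sp.Matrix([a, b])

# Two Hadamards cancel; simplification collapses the nested radicals
# back to the normal form of the original state.
print(sp.simplify(H * (H * psi)))  # Matrix([[a], [b]])
```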

The automated tactics were built on proof assistants—software tools that mechanically check every step of a mathematical proof, so that any claim they accept is certified correct.
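
For a flavor of what a proof assistant does, here is a toy machine-checked fact written in Lean (illustrative only; the certified implementation itself is developed in the Coq proof assistant):

```lean
-- A toy machine-checked fact in Lean (illustrative; the certified
-- implementation of Shor's algorithm is developed in Coq). The proof
-- assistant accepts `rfl` only because both sides compute to the same
-- value; an incorrect claim would be rejected.
example (n : Nat) : n + 0 = n := rfl
```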

Other challenges arose from quantum algorithms that are parameterized by oracles—classical subroutines that must nonetheless be compiled to run on quantum hardware. Those must be reasoned about, too, the researchers explained, but they can be large—many times larger than an algorithm’s quantum scaffold—and challenging to encode for “bug-free” quantum processing.
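
In Shor’s algorithm, for instance, the central oracle is modular exponentiation. As the sketch below suggests, the classical function takes only a few lines of Python, while its reversible, gate-level quantum encoding—the form the verification must actually cover—is enormously larger.

```python
# Sketch of the classical oracle at the heart of Shor's algorithm:
# modular exponentiation by repeated squaring. On quantum hardware this
# arithmetic must be re-encoded as a reversible circuit, which is far
# larger than the algorithm's quantum scaffold.
def mod_exp(a: int, x: int, n: int) -> int:
    """Return a**x mod n."""
    result, base = 1, a % n
    while x > 0:
        if x & 1:
            result = (result * base) % n
        base = (base * base) % n
        x >>= 1
    return result

print(mod_exp(7, 4, 15))  # 1, i.e. 7 has order 4 modulo 15
```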

As the research team developed a series of new protocols and techniques for formal verification, they began testing their effectiveness on recognized quantum algorithms, including one that is inextricably tied to quantum computing.

In 1994, mathematician Peter Shor developed an algorithm revealing how then-hypothetical quantum computers could factor numbers exponentially faster than standard machines. Known as Shor’s algorithm, it is widely seen as having launched the age of quantum computing, while simultaneously exposing the fragility of existing public-key cryptography safeguards that are based on the infeasibility of factoring massive numbers.
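
The structure of the algorithm is a reduction: a quantum subroutine finds the order r of a random base a modulo N, and classical post-processing extracts factors with a greatest-common-divisor computation. A toy Python sketch (which brute-forces the order, so it only runs for tiny N) shows that reduction:

```python
# Toy sketch of Shor's reduction from factoring to order finding. The
# order r is brute-forced here, which only works for tiny N; on a
# quantum computer that step is the exponentially faster part.
from math import gcd

def factor_from_order(N: int, a: int):
    r = 1
    while pow(a, r, N) != 1:  # order of a modulo N
        r += 1
    if r % 2 != 0:
        return None  # odd order: retry with another base a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root: retry with another base a
    return gcd(y - 1, N), gcd(y + 1, N)

print(factor_from_order(15, 7))  # (3, 5)
```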

“That algorithm has been a fundamental motivation for the development of quantum computers and is at a scale and complexity not reached in prior formal verification methods. We felt it would work well for our theoretical concepts,” says Wu, who also has an appointment in the University of Maryland Institute for Advanced Computer Studies.

Wu adds that the PNAS paper is an extension of work that he, Hicks and Rand published in 2021, earning them and several graduate students in QuICS a Distinguished Paper Award.

That prior research—with Hicks taking a lead role—focused on classical software in the context of quantum computing, Wu says, whereas the new work applies to quantum software (Shor’s algorithm) for factorization. “This is a continued effort to build the foundation for high-assurance implementation of large-scale quantum applications,” he says.

Moving forward, the researchers say that programming quantum computers will continue to be challenging, at least in the near term. Qubits—the basic units of information in quantum computing—will need to scale up seamlessly, and gate pipelines will need to be short to prevent decoherence, which is the loss of the fragile superpositions in quantum states.

Optimizing compilers can help by transforming a source algorithm to work with fewer resources, and where those compilers fall short, programmers can optimize their algorithms by hand.
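
As a simplified illustration of the kind of rewrite such a compiler performs—far cruder than the verified optimizations in the authors’ earlier work—consider a Python pass that cancels adjacent self-inverse gates, since applying H or X twice in a row is a no-op:

```python
# Simplified peephole pass: adjacent identical self-inverse gates on the
# same qubit cancel (H*H = I, X*X = I). Real verified optimizers apply
# much richer rewrite rules; this only gestures at the idea.
SELF_INVERSE = {"H", "X", "Y", "Z"}

def cancel_adjacent(circuit):
    out = []
    for gate in circuit:  # gate = (name, qubit)
        if out and out[-1] == gate and gate[0] in SELF_INVERSE:
            out.pop()  # the pair cancels; may expose further pairs
        else:
            out.append(gate)
    return out

prog = [("H", 0), ("X", 0), ("X", 0), ("H", 0), ("Z", 1)]
print(cancel_adjacent(prog))  # [('Z', 1)] -- four gates eliminated
```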

Both approaches, however, will inevitably have bugs.

“We believe that by developing a framework based on formal methods to mathematically certify that quantum programs do what they are meant to, we can implement a principled approach to mitigating human errors in this critical domain,” says Peng.
