Publications

Recent publications

  • M Fellous-Asiani, JH Chai, Y Thonnart, HK Ng, RS Whitney, and A Auffèves, Optimizing resource efficiencies for scalable full-stack quantum computers, PRX Quantum 4, 040319 (2023); arXiv:2209.05469.
    Abstract

    In the race to build scalable quantum computers, minimizing the resource consumption of their full stack to achieve a target performance becomes crucial. It mandates a synergy of fundamental physics and engineering: the former for the microscopic aspects of computing performance and the latter for the macroscopic resource consumption. For this, we propose a holistic methodology dubbed metric-noise-resource (MNR) that is able to quantify and optimize all aspects of the full-stack quantum computer, bringing together concepts from quantum physics (e.g., noise on the qubits), quantum information (e.g., computing architecture and type of error correction), and enabling technologies (e.g., cryogenics, control electronics, and wiring). This holistic approach allows us to define and study resource efficiencies as ratios between performance and resource cost. As a proof of concept, we use MNR to minimize the power consumption of a full-stack quantum computer, performing noisy or fault-tolerant computing with a target performance for the task of interest. Comparing this with a classical processor performing the same task, we identify a quantum energy advantage in regimes of parameters distinct from the commonly considered quantum computational advantage. This provides a previously overlooked practical argument for building quantum computers. While our illustration uses highly idealized parameters inspired by superconducting qubits with concatenated error correction, the methodology is universal—it applies to other qubits and error-correcting codes—and it provides experimenters with guidelines to build energy-efficient quantum computers. In some regimes of high energy consumption, it can reduce this consumption by orders of magnitude. Overall, our methodology lays the theoretical foundation for resource-efficient quantum technologies.
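
    A minimal numerical sketch of the kind of optimization the paper performs (the noise, power, and overhead models below are invented for illustration and are not the paper's actual models): given a target logical error rate, sweep the concatenation level and the per-qubit drive power, and keep the cheapest configuration that meets the target.

      import numpy as np

      # Toy models (assumptions for illustration only): per-gate error falls
      # with drive power; total power grows with the number of physical qubits.
      def gate_error(p_drive):                 # assumed noise model
          return 1e-2 / p_drive

      def logical_error(eps, level, eps_th=1e-2):
          # standard concatenation formula: eps_th * (eps/eps_th)**(2**level)
          return eps_th * (eps / eps_th) ** (2 ** level)

      def total_power(p_drive, level, n_logical=100, overhead=7):
          # overhead**level physical qubits per logical qubit
          return n_logical * overhead ** level * p_drive

      target = 1e-10                           # target logical error rate
      best = None
      for level in range(1, 6):
          for p_drive in np.logspace(0.1, 3, 200):    # arbitrary power units
              if logical_error(gate_error(p_drive), level) <= target:
                  P = total_power(p_drive, level)
                  if best is None or P < best[0]:
                      best = (P, level, p_drive)
                  break    # smallest power reaching the target at this level

      P, level, p_drive = best
      print(f"cheapest configuration: concatenation level {level}, "
            f"{p_drive:.1f} a.u. of drive power per qubit, {P:.3g} a.u. total")

    With these toy numbers, neither the lowest nor the highest concatenation level is cheapest: an intermediate level minimizes the total power, which is the kind of trade-off the MNR methodology is built to expose.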

  • J Qi, X Xu, D Poletti, and HK Ng, Efficacy of noisy dynamical decoupling, Phys Rev A 107, 032615 (2023); arXiv:2209.09039.
    Abstract

    Dynamical decoupling (DD) refers to a well-established family of methods for error mitigation, comprising pulse sequences aimed at averaging away slowly evolving noise in quantum systems. Here we revisit the question of its efficacy in the presence of noisy pulses in scenarios important for quantum devices today: pulses with gate control errors, and the computational setting where DD is used to reduce noise in every computational gate. We focus on the well-known schemes of periodic (or universal) DD and its extension, concatenated DD, for scaling up its power. The qualitative conclusions from our analysis of these two schemes nevertheless apply to other DD approaches. In the presence of noisy pulses, DD does not always mitigate errors. It does so only when the added noise from the imperfect DD pulses does not outweigh the increased ability in averaging away the original background noise. We present break-even conditions that delineate when DD is useful, and further find that there is a limit in the performance of concatenated DD, specifically in how far one can concatenate the DD pulse sequences before the added noise no longer offers any further benefit in error mitigation.
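
    A toy numerical illustration of the break-even effect (the dephasing and pulse-error models below are simple assumptions, not the paper's): a qubit with slowly varying dephasing is echoed by pi_x pulses, each carrying a random over-rotation of spread eps. For small eps, a couple of pulses essentially restore the fidelity; for larger eps, the accumulated pulse noise eventually drags the fidelity below the no-pulse baseline.

      import numpy as np

      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Z = np.array([[1, 0], [0, -1]], dtype=complex)

      def rot(theta, P):    # exp(-i * theta/2 * P) for a Pauli matrix P
          return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * P

      def realized_unitary(n_pulses, delta, pulse_errs, T=1.0):
          """n_pulses pi_x pulses, each over-rotated by its entry in
          pulse_errs, interleaved with free evolution under a static
          detuning delta (slowly varying dephasing) over total time T."""
          if n_pulses == 0:
              return rot(delta * T, Z)
          U = np.eye(2, dtype=complex)
          for e in pulse_errs:
              U = rot(np.pi + e, X) @ rot(delta * T / n_pulses, Z) @ U
          return U

      def avg_gate_fidelity(U):    # average fidelity of U with the identity
          return (abs(np.trace(U)) ** 2 / 2 + 1) / 3

      rng = np.random.default_rng(1)
      deltas = rng.normal(0.0, 2.0, size=2000)    # frozen per run: slow noise
      for eps in [0.02, 0.6]:
          fids = {n: np.mean([avg_gate_fidelity(
                      realized_unitary(n, d, rng.normal(0, eps, n)))
                      for d in deltas]) for n in [0, 2, 8, 32]}
          print(f"pulse error {eps}: "
                + ", ".join(f"n={n}: F={f:.3f}" for n, f in fids.items()))

    In this model the n = 0 baseline is set by the unmitigated dephasing; with the larger pulse error, adding more pulses eventually pushes the fidelity below that baseline, the break-even phenomenon discussed in the abstract.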

  • A Jayashankar, My DHL, HK Ng, and P Mandayam, Achieving fault tolerance against amplitude-damping noise, Phys Rev Research 4, 023034 (2022); arXiv:2107.05485.
    Abstract

    With the intense interest in small, noisy quantum computing devices comes the push for larger, more accurate — and hence more useful — quantum computers. While fully fault-tolerant quantum computers are, in principle, capable of achieving arbitrarily accurate calculations using devices subjected to general noise, they require immense resources far beyond our current reach. An intermediate step would be to construct quantum computers of limited accuracy enhanced by lower-level, and hence lower-cost, noise-removal techniques. This is the motivation for our work, which looks into fault-tolerant encoded quantum computation targeted at the dominant noise afflicting the quantum device. Specifically, we develop a protocol for fault-tolerant encoded quantum computing components in the presence of amplitude-damping noise, using a 4-qubit code and a recovery procedure tailored to such noise. We describe a universal set of fault-tolerant encoded gadgets and compute the pseudothreshold for the noise, below which our scheme leads to more accurate computation. Our work demonstrates the possibility of applying the ideas of quantum fault tolerance to targeted noise models, generalizing the recent pursuit of biased-noise fault tolerance beyond the usual Pauli noise models. We also illustrate how certain aspects of the standard fault tolerance intuition, largely acquired through Pauli-noise considerations, can fail in the face of more general noise.
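
    For illustration, the sketch below takes the 4-qubit amplitude-damping code of Leung et al. (a natural stand-in for the code in question) and numerically checks the approximate Knill-Laflamme conditions for the first-order amplitude-damping error set; it does not reproduce the paper's recovery procedure or fault-tolerant gadgets. For this code the violations should appear only at order gamma^2.

      import numpy as np
      from functools import reduce
      from itertools import product

      gamma = 0.01                                      # damping probability
      K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])  # no-decay Kraus operator
      K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])      # decay |1> -> |0>

      def kron(ops):
          return reduce(np.kron, ops)

      def basis(bits):
          v = np.zeros(16)
          v[int(bits, 2)] = 1.0
          return v

      # codewords of the Leung et al. 4-qubit amplitude-damping code
      zero_L = (basis('0000') + basis('1111')) / np.sqrt(2)
      one_L = (basis('0011') + basis('1100')) / np.sqrt(2)
      code = np.column_stack([zero_L, one_L])

      # first-order error set: no decay anywhere, or one decay on one qubit
      errors = [kron([K0] * 4)]
      for k in range(4):
          errors.append(kron([K1 if j == k else K0 for j in range(4)]))

      # Knill-Laflamme: P E^dag F P should be proportional to P; for this
      # approximate code the violations show up only at order gamma^2
      worst = 0.0
      for E, F in product(errors, repeat=2):
          M = code.conj().T @ E.conj().T @ F @ code     # 2x2 overlap matrix
          dev = M - np.trace(M) / 2 * np.eye(2)         # traceless part
          worst = max(worst, np.max(np.abs(dev)))
      print(f"gamma = {gamma}: worst KL violation = {worst:.2e}"
            f" (compare gamma^2 = {gamma ** 2:.0e})")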

  • M Fellous-Asiani, JH Chai, RS Whitney, A Auffèves, and HK Ng, Limitations in quantum computing from resource constraints, PRX Quantum 2, 040335 (2021); arXiv:2007.01966.
    Abstract

    Fault-tolerant quantum computation is the only known route to large-scale, accurate quantum computers. Fault tolerance schemes prescribe how, by investing more physical resources and scaling up the size of the computer, we can keep the computational errors in check and carry out more and more accurate calculations. Underlying all such schemes is the assumption that the error per physical gate is independent of the size of the quantum computer. This, unfortunately, is not reflective of current quantum computing experiments. Here, we examine the general consequences on fault-tolerant quantum computation when constraints on physical resources, such as limited energy input, result in physical error rates that grow as the computer grows. In this case, fault tolerance schemes can no longer reduce computational error to an arbitrarily small number, even if one starts below the so-called fault tolerance noise threshold. Instead, there is a minimum attainable computational error, beyond which further growth of the computer in an attempt to reduce the error becomes counter-productive. We discuss simple, but rather generic, situations in which this effect can arise, and highlight the areas of future developments needed for experiments to overcome this limitation.
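
    A toy version of the effect described above (all models and numbers are illustrative assumptions): at concatenation level L, the logical error is eps_th (eps/eps_th)^(2^L), but if resource constraints make the physical error eps grow with the machine size, the logical error bottoms out at some level and then worsens, eventually crossing the threshold entirely.

      import numpy as np

      eps_th = 1e-2      # fault-tolerance threshold of the code (assumed)
      eps_0 = 1e-3       # physical error rate of a minimal machine (assumed)
      growth = 2e-4      # assumed per-qubit penalty from constrained resources
      overhead = 7       # physical qubits per logical qubit, per level

      for level in range(0, 7):
          n_phys = overhead ** level            # machine size at this level
          eps_phys = eps_0 * (1 + growth * n_phys)
          if eps_phys >= eps_th:
              print(f"level {level}: physical error above threshold, "
                    "concatenation no longer helps at all")
              continue
          eps_log = eps_th * (eps_phys / eps_th) ** (2 ** level)
          print(f"level {level}: {n_phys:>6} qubits, "
                f"eps_phys = {eps_phys:.2e}, logical error = {eps_log:.2e}")

    With these numbers the logical error reaches a minimum at an intermediate level and then rises again: growing the computer further becomes counter-productive, exactly the minimum attainable computational error of the abstract.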

  • Y Gu, R Mishra, B-G Englert, and HK Ng, Randomized linear gate set tomography, PRX Quantum 2, 030328 (2021); arXiv:2010.12235.
    Abstract

    Characterizing the noise in the set of gate operations that form the building blocks of a quantum computational device is a necessity for assessing the quality of the device. Here, we introduce randomized linear gate set tomography, an easy-to-implement gate set tomography procedure that combines the idea of state-preparation-and-measurement-error-free characterization of standard gate set tomography with no-design randomized tomographic circuits and computational ease brought about by an appropriate linear approximation. We demonstrate the performance of our scheme through simulated examples as well as experiments done on the IBM Quantum Experience Platform. In each case, we see that the performance of our procedure is comparable with that of standard gate set tomography, while requiring no complicated tomographic circuit design and taking much less computational time in deducing the estimate of the noise parameters. This allows for straightforward on-the-fly characterization of the gate operations in an experiment.
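
    A sketch of the linear-approximation idea (a toy reconstruction, not the paper's estimator): small coherent gate errors shift circuit outcome probabilities linearly, so unstructured random circuits plus ordinary least squares suffice to characterize them. Raw gate parameters are identifiable only up to gauge directions invisible to the measurements, so the check at the end compares predicted probabilities rather than the parameters themselves.

      import numpy as np

      rng = np.random.default_rng(2)
      I2 = np.eye(2, dtype=complex)
      paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
                np.array([[0, -1j], [1j, 0]], dtype=complex),
                np.array([[1, 0], [0, -1]], dtype=complex)]

      def rot(vec):    # exp(-i/2 * vec . sigma)
          theta = np.linalg.norm(vec)
          if theta == 0:
              return I2.copy()
          P = sum(c / theta * p for c, p in zip(vec, paulis))
          return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * P

      ideal = [rot([np.pi / 2, 0, 0]), rot([0, np.pi / 2, 0])]   # X90, Y90

      def prob0(circuit, params):
          """P(outcome 0) for |0> -> circuit -> Z measurement, with a small
          error rotation vector (3 parameters) attached to each gate."""
          errs = params.reshape(2, 3)
          U = I2
          for g in circuit:
              U = rot(errs[g]) @ ideal[g] @ U
          return abs(U[0, 0]) ** 2

      circuits = [rng.integers(0, 2, size=int(rng.integers(2, 12)))
                  for _ in range(300)]                 # unstructured circuits
      x_true = rng.normal(0, 0.01, size=6)             # true gate errors

      # linearize: p(x) ~ p(0) + J x, with J built by finite differences
      p_ideal = np.array([prob0(c, np.zeros(6)) for c in circuits])
      J = np.zeros((len(circuits), 6))
      h = 1e-6
      for j in range(6):
          dx = np.zeros(6)
          dx[j] = h
          J[:, j] = (np.array([prob0(c, dx) for c in circuits]) - p_ideal) / h

      data = np.array([prob0(c, x_true) for c in circuits])
      x_hat, *_ = np.linalg.lstsq(J, data - p_ideal, rcond=None)

      # validate on fresh circuits: predictions should agree approximately
      for c in [rng.integers(0, 2, size=10) for _ in range(3)]:
          print(f"true P0 = {prob0(c, x_true):.4f}, "
                f"linear-GST prediction = {prob0(c, x_hat):.4f}")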

  • Y Quek, S Fort, and HK Ng, Adaptive quantum state tomography with neural networks, npj Quantum Inf 7, 105 (2021); arXiv:1812.06693.
    Abstract

    Quantum State Tomography is the task of determining an unknown quantum state by making measurements on identical copies of the state. Current algorithms are costly both on the experimental front — requiring vast numbers of measurements — as well as in terms of the computational time to analyze those measurements. In this paper, we address the problem of analysis speed and flexibility, introducing Neural Adaptive Quantum State Tomography (NA-QST), a machine learning based algorithm for quantum state tomography that adapts measurements and provides orders of magnitude faster processing while retaining state-of-the-art reconstruction accuracy. Our algorithm is inspired by particle swarm optimization and Bayesian particle-filter based adaptive methods, which we extend and enhance using neural networks. The resampling step, in which a bank of candidate solutions — particles — is refined, is in our case learned directly from data, removing the computational bottleneck of standard methods. We successfully replace the Bayesian calculation that requires computational time of O(poly(n)) with a learned heuristic whose time complexity empirically scales as O(log(n)) with the number of copies measured n, while retaining the same reconstruction accuracy. This corresponds to a factor of a million speedup for 10^7 copies measured. We demonstrate that our algorithm learns to work with basis, symmetric informationally complete (SIC), as well as other types of POVMs. We discuss the value of measurement adaptivity for each POVM type, demonstrating that its effect is significant only for basis POVMs. Our algorithm can be retrained within hours on a single laptop for a two-qubit situation, which suggests a feasible time-cost when extended to larger systems. It can also adapt to a subset of possible states, a choice of the type of measurement, and other experimental details.
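
    For context, here is a toy version of the standard Bayesian particle-filter tomography step that NA-QST learns to replace (single qubit; the jitter and resampling rules are generic choices, not the paper's implementation):

      import numpy as np

      rng = np.random.default_rng(3)

      def random_bloch(n):                  # uniform pure states on the sphere
          v = rng.normal(size=(n, 3))
          return v / np.linalg.norm(v, axis=1, keepdims=True)

      true_state = np.array([0.6, 0.0, 0.8])     # unknown state (Bloch vector)
      particles = random_bloch(1000)             # candidate solutions
      weights = np.full(len(particles), 1.0 / len(particles))

      axes = np.eye(3)                           # measure along x, y, z in turn
      for t in range(300):
          a = axes[t % 3]
          outcome = 1 if rng.random() < (1 + true_state @ a) / 2 else -1
          # Bayes update: reweight each particle by its outcome likelihood
          weights *= (1 + outcome * (particles @ a)) / 2
          weights /= weights.sum()
          # resample when the effective sample size collapses; this is the
          # costly step that NA-QST replaces with a learned heuristic
          if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
              idx = rng.choice(len(particles), size=len(particles), p=weights)
              particles = particles[idx] + rng.normal(0, 0.02,
                                                      (len(particles), 3))
              particles /= np.linalg.norm(particles, axis=1, keepdims=True)
              weights = np.full(len(particles), 1.0 / len(particles))

      estimate = weights @ particles
      print("true Bloch vector      :", true_state)
      print("posterior-mean estimate:", np.round(estimate, 3))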

  • B-G Englert, M Evans, GH Jang, HK Ng, DJ Nott, and Y-L Seah, Checking for model failure and for prior-data conflict with the constrained multinomial model, Metrika DOI 10.1007/s00184-021-00811-8 (2021); arXiv:1804.06906.
    Abstract

    The multinomial model is one of the simplest statistical models. When constraints are placed on the possible values for the probabilities, however, it becomes much more difficult to deal with. Model checking and checking for prior-data conflict is considered here for such models. A theorem is proved that establishes the consistency of the check on the prior. Applications are presented to models that arise in quantum state estimation as well as the Bayesian analysis of models for ordered probabilities.
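
    A toy prior-data conflict check in this spirit (the constraint, prior, and check statistic below are simple stand-ins, not the paper's constructions): constrain the multinomial probabilities to be ordered, p1 <= p2 <= p3, and ask how surprising the observed counts are under the prior predictive.

      import numpy as np
      from math import lgamma

      rng = np.random.default_rng(4)
      n_trials = 30
      counts_obs = np.array([20, 6, 4])    # counts leaning against the ordering

      # constrained prior: uniform Dirichlet draws, keeping only ordered ones
      draws = rng.dirichlet([1, 1, 1], size=120000)
      prior = draws[np.all(np.diff(draws, axis=1) >= 0, axis=1)]

      def log_coeff(counts):               # log multinomial coefficient
          return lgamma(counts.sum() + 1) - sum(lgamma(c + 1) for c in counts)

      def prior_predictive(counts):        # M(x) = E_prior[ p(x | p) ]
          return np.mean(np.exp(log_coeff(counts)
                                + (counts * np.log(prior)).sum(axis=1)))

      m_obs = prior_predictive(counts_obs)
      # conflict p-value: how often is M(X) at most M(x_obs) when X is itself
      # drawn from the prior predictive?  (a small value signals conflict)
      ps = prior[rng.integers(len(prior), size=1000)]
      m_sim = np.array([prior_predictive(rng.multinomial(n_trials, p))
                        for p in ps])
      print(f"M(observed) = {m_obs:.3g}, "
            f"conflict p-value = {(m_sim <= m_obs).mean():.3f}")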

  • J Qi and HK Ng, Randomized benchmarking in the presence of time-correlated dephasing noise, Phys Rev A 103, 022607 (2021); arXiv:2010.11498.
    Abstract

    Randomized benchmarking has emerged as a popular and easy-to-implement experimental technique for gauging the quality of gate operations in quantum computing devices. A typical randomized benchmarking procedure identifies the exponential decay in the fidelity as the benchmarking sequence of gates increases in length, and the decay rate is used to estimate the fidelity of the gate. That the fidelity decays exponentially, however, relies on the assumption of time-independent or static noise in the gates, with no correlations or significant drift in the noise over the gate sequence, a well-satisfied condition in many situations. Deviations from the standard exponential decay, however, have been observed, usually attributed to some amount of time correlations in the noise, though the precise mechanisms for deviation have yet to be fully explored. In this work, we examine this question of randomized benchmarking for time-correlated noise—specifically for time-correlated dephasing noise for exact solvability—and elucidate the circumstances in which a deviation from exponential decay can be expected.
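
    A toy calculation of the effect (not the paper's analysis): suppose each gate in a benchmarking sequence leaves a small residual dephasing angle of standard deviation sigma. If a fresh angle is drawn for every gate (no correlations), the averaged survival probability decays exponentially in the sequence length; if one angle is frozen for the whole sequence (strong time correlation), the decay is Gaussian and the fitted per-gate decay rate is no longer constant.

      import numpy as np

      sigma = 0.05                                   # phase spread per gate
      m = np.array([1, 5, 10, 20, 40, 80])           # sequence lengths
      # survival of |+> after a total phase phi is (1 + cos(phi))/2, and
      # E[cos(phi)] = exp(-Var(phi)/2) for Gaussian phi, so:
      F_uncorr = (1 + np.exp(-m * sigma ** 2 / 2)) / 2       # Var = m sigma^2
      F_static = (1 + np.exp(-m ** 2 * sigma ** 2 / 2)) / 2  # Var = (m sigma)^2

      for label, F in [("uncorrelated", F_uncorr), ("static", F_static)]:
          # for a true exponential decay F = 1/2 + A p^m, this is constant
          rate = -np.diff(np.log(F - 0.5)) / np.diff(m)
          print(f"{label:12s} per-gate decay rate:", np.round(rate, 4))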

Earlier publications (2013-2020)

Preprints

  • W Li, R Han, J Shang, HK Ng, and B-G Englert, Sequentially constrained Monte Carlo sampler for quantum states, arXiv:2109.14215 (2021).
  • R Han, W Li, S Bagchi, HK Ng, and B-G Englert, Uncorrelated problem-specific samples of quantum states from zero-mean Wishart distributions, arXiv:2106.08533 (2021).