Recent publications

  • M Fellous-Asiani, JH Chai, RS Whitney, A Auffèves, and HK Ng, Limitations in quantum computing from resource constraints, PRX Quantum 2, 040335 (2021); arXiv:2007.01966.

    Fault-tolerant quantum computation is the only known route to large-scale, accurate quantum computers. Fault tolerance schemes prescribe how, by investing more physical resources and scaling up the size of the computer, we can keep the computational errors in check and carry out more and more accurate calculations. Underlying all such schemes is the assumption that the error per physical gate is independent of the size of the quantum computer. This, unfortunately, is not reflective of current quantum computing experiments. Here, we examine the general consequences on fault-tolerant quantum computation when constraints on physical resources, such as limited energy input, result in physical error rates that grow as the computer grows. In this case, fault tolerance schemes can no longer reduce computational error to an arbitrarily small number, even if one starts below the so-called fault tolerance noise threshold. Instead, there is a minimum attainable computational error, beyond which further growth of the computer in an attempt to reduce the error becomes counter-productive. We discuss simple, but rather generic, situations in which this effect can arise, and highlight the areas of future developments needed for experiments to overcome this limitation.
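
The counter-productive growth described above can be made concrete in a toy model. The sketch below assumes a standard concatenated [[7,1,3]] code, for which the logical error after k levels behaves as p_th (p/p_th)^(2^k), and invents a simple size-dependence for the physical error rate; the threshold, base error rate, and size scale are our illustrative numbers, not parameters from the paper.

```python
# Toy model with our own illustrative numbers (not the paper's): concatenating
# a [[7,1,3]] code k times gives logical error p_L = p_th * (p/p_th)**(2**k),
# but here the physical error rate p grows with the number of qubits N.
p_th = 1e-2    # assumed fault-tolerance threshold
p0 = 1e-3      # assumed base physical error rate, below threshold
N0 = 1e5       # assumed machine size at which the error growth becomes significant

def logical_error(k):
    N = 7 ** k                  # physical qubits per logical qubit at level k
    p = p0 * (1 + N / N0)       # size-dependent physical error rate (assumption)
    return p_th * (p / p_th) ** (2 ** k)

errors = [logical_error(k) for k in range(1, 8)]
k_best = 1 + min(range(len(errors)), key=errors.__getitem__)
# Growing past k_best is counter-productive: the logical error goes back up.
```

Running this shows the logical error falling with each concatenation level up to some optimal level, then rising again, i.e., a minimum attainable computational error despite starting below threshold.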

  • Y Gu, R Mishra, B-G Englert, and HK Ng, Randomized linear gate set tomography, PRX Quantum 2, 030328 (2021); arXiv:2010.12235.

    Characterizing the noise in the set of gate operations that form the building blocks of a quantum computational device is a necessity for assessing the quality of the device. Here, we introduce randomized linear gate set tomography, an easy-to-implement gate set tomography procedure that combines the state-preparation-and-measurement-error-free characterization of standard gate set tomography with randomized tomographic circuits that require no special design, and with the computational ease brought about by an appropriate linear approximation. We demonstrate the performance of our scheme through simulated examples as well as experiments done on the IBM Quantum Experience platform. In each case, we see that the performance of our procedure is comparable with that of standard gate set tomography, while requiring no complicated tomographic circuit design and taking much less computational time to deduce the estimate of the noise parameters. This allows for straightforward on-the-fly characterization of the gate operations in an experiment.

  • Y Quek, S Fort, and HK Ng, Adaptive Quantum State Tomography with Neural Networks, npj Quantum Inf 7, 105 (2021); arXiv:1812.06693.

    Quantum state tomography is the task of determining an unknown quantum state by making measurements on identical copies of the state. Current algorithms are costly both on the experimental front, requiring vast numbers of measurements, and in terms of the computational time needed to analyze those measurements. In this paper, we address the problem of analysis speed and flexibility, introducing neural adaptive quantum state tomography (NA-QST), a machine-learning-based algorithm for quantum state tomography that adapts measurements and provides orders-of-magnitude faster processing while retaining state-of-the-art reconstruction accuracy. Our algorithm is inspired by particle-swarm optimization and Bayesian particle-filter-based adaptive methods, which we extend and enhance using neural networks. The resampling step, in which a bank of candidate solutions (particles) is refined, is in our case learned directly from data, removing the computational bottleneck of standard methods. We successfully replace the Bayesian calculation, which requires computational time of O(poly(n)), with a learned heuristic whose time complexity empirically scales as O(log(n)) in the number of copies measured, n, while retaining the same reconstruction accuracy. This corresponds to a factor-of-a-million speedup for 10^7 copies measured. We demonstrate that our algorithm learns to work with basis measurements, symmetric informationally complete (SIC) POVMs, as well as other types of POVMs. We discuss the value of measurement adaptivity for each POVM type, demonstrating that its effect is significant only for basis POVMs. Our algorithm can be retrained within hours on a single laptop for a two-qubit situation, which suggests a feasible time cost when extended to larger systems. It can also adapt to a subset of possible states, a choice of the type of measurement, and other experimental details.
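
The Bayesian particle-filter baseline that NA-QST accelerates can be sketched for a single parameter: a bank of particles estimates the Bloch z-component of a qubit from repeated Z measurements. All values here (the "true" state, particle count, resampling rule, jitter scale) are illustrative assumptions of ours, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
z_true = 0.6        # "unknown" Bloch z-component we try to estimate (illustrative)

# Bank of candidate solutions ("particles") for the one-parameter estimate.
particles = rng.uniform(-1, 1, size=2000)
weights = np.full(particles.size, 1.0 / particles.size)

for _ in range(500):                             # 500 single-shot Z measurements
    outcome = 1 if rng.random() < (1 + z_true) / 2 else -1
    weights *= (1 + outcome * particles) / 2     # Bayesian likelihood update
    weights /= weights.sum()
    # Resample when the effective sample size collapses; this is the costly
    # step that NA-QST replaces with a learned network.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = np.clip(particles[idx] + rng.normal(0, 0.02, particles.size), -1, 1)
        weights = np.full(particles.size, 1.0 / particles.size)

z_est = float(np.sum(weights * particles))       # posterior-mean estimate
```

The weight update and resampling loop is the computational bottleneck referred to in the abstract; in NA-QST this step is learned from data rather than computed explicitly.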

  • B-G Englert, M Evans, GH Jang, HK Ng, DJ Nott, and Y-L Seah, Checking for model failure and for prior-data conflict with the constrained multinomial model, Metrika DOI 10.1007/s00184-021-00811-8 (2021); arXiv:1804.06906.

    The multinomial model is one of the simplest statistical models. When constraints are placed on the possible values of the probabilities, however, it becomes much more difficult to deal with. Model checking and checking for prior-data conflict are considered here for such models. A theorem is proved that establishes the consistency of the check on the prior. Applications are presented to models that arise in quantum state estimation as well as to the Bayesian analysis of models for ordered probabilities.

  • J Qi and HK Ng, Randomized benchmarking in the presence of time-correlated dephasing noise, Phys Rev A 103, 022607 (2021); arXiv:2010.11498.

    Randomized benchmarking has emerged as a popular and easy-to-implement experimental technique for gauging the quality of gate operations in quantum computing devices. A typical randomized benchmarking procedure identifies the exponential decay in the fidelity as the benchmarking sequence of gates increases in length, and the decay rate is used to estimate the fidelity of the gate. That the fidelity decays exponentially, however, relies on the assumption of time-independent or static noise in the gates, with no correlations or significant drift in the noise over the gate sequence, a well-satisfied condition in many situations. Deviations from the standard exponential decay, however, have been observed, usually attributed to some amount of time correlations in the noise, though the precise mechanisms for deviation have yet to be fully explored. In this work, we examine this question of randomized benchmarking for time-correlated noise—specifically for time-correlated dephasing noise for exact solvability—and elucidate the circumstances in which a deviation from exponential decay can be expected.
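
In the static-noise case described above, the standard analysis reduces to fitting the exponential decay F(m) = A p^m + B of the survival probability against the sequence length m. A minimal sketch with simulated data follows; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Standard RB model under static noise: survival probability F(m) = A*p**m + B.
A_true, B_true, p_true = 0.5, 0.5, 0.995   # illustrative values
m = np.arange(1, 201, 10)                  # benchmarking sequence lengths
F = A_true * p_true ** m + B_true + rng.normal(0, 0.002, m.size)  # simulated data

def model(m, A, B, p):
    return A * p ** m + B

popt, _ = curve_fit(model, m, F, p0=[0.5, 0.5, 0.99])
p_est = popt[2]
# For a single qubit, the average gate fidelity follows as p + (1 - p)/2.
F_avg = p_est + (1 - p_est) / 2
```

Time-correlated noise of the kind studied in the paper breaks the premise of this fit: the data need no longer follow a single exponential, so the extracted p loses its usual interpretation.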

  • Y Lu, JY Sim, J Suzuki, B-G Englert, and HK Ng, Direct estimation of minimum gate fidelity, Phys Rev A 102, 022410 (2020); arXiv:2004.02422.

    With the current interest in building quantum computers, there is a strong need for accurate and efficient characterization of the noise in quantum gate implementations. A key measure of the performance of a quantum gate is the minimum gate fidelity, i.e., the fidelity of the gate, minimized over all input states. Conventionally, the minimum fidelity is estimated by first accurately reconstructing the full gate process matrix using the experimental procedure of quantum process tomography (QPT). Then, a numerical minimization is carried out to find the minimum fidelity. QPT is, however, well known to be costly, and it might appear that we can do better if the goal is only to estimate a single number. In this work, we propose a hybrid numerical-experimental scheme that employs a numerical gradient-free minimization (GFM) and an experimental target-fidelity estimation procedure to directly estimate the minimum fidelity without reconstructing the process matrix. We compare this to an alternative scheme, referred to as QPT fidelity estimation, that does use QPT, but directly employs the minimum gate fidelity as the termination criterion. Both approaches can thus be considered direct estimation schemes. General theoretical bounds suggest a significant resource savings for the GFM scheme over QPT fidelity estimation; numerical simulations for specific classes of noise, however, show that both schemes have similar performance, reminding us of the need for caution when using general bounds for specific examples. The GFM scheme nevertheless presents potential for future improvements in resource cost, with the development of even more efficient GFM algorithms.
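
The idea of a gradient-free search for the worst-case input state can be illustrated on a single qubit. The "noisy gate" below, a small over-rotation about z, is our invented example, not a case from the paper; for this simple channel the minimum fidelity is known analytically (cos^2 of half the over-rotation angle), so the numerical search can be checked.

```python
import numpy as np
from scipy.optimize import minimize

# Invented example: target gate is the identity, the "implementation" is a
# small over-rotation about z (theta_err is our illustrative choice).
theta_err = 0.2
V = np.diag([np.exp(-1j * theta_err / 2), np.exp(1j * theta_err / 2)])

def fidelity(x):
    th, ph = x   # Bloch-sphere angles parameterize the pure input state
    psi = np.array([np.cos(th / 2), np.exp(1j * ph) * np.sin(th / 2)])
    return np.abs(psi.conj() @ V @ psi) ** 2   # overlap with the ideal (identity) action

# Gradient-free (Nelder-Mead) search for the worst-case input state.
res = minimize(fidelity, x0=[0.3, 0.3], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
min_fid = res.fun   # analytically, the minimum over states is cos(theta_err/2)**2
```

In the experimental scheme of the paper, the fidelity evaluation inside the loop is replaced by a target-fidelity estimation measurement rather than this classical simulation.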

  • DJ Nott, M Seah, L Al-Labadi, M Evans, HK Ng, and B-G Englert, Using prior expansions for prior-data conflict checking, Bayesian Anal 16, 203 (2021); arXiv:1902.10393.

    Any Bayesian analysis involves combining information represented through different model components, and when different sources of information are in conflict it is important to detect this. Here we consider checking for prior-data conflict in Bayesian models by expanding the prior used for the analysis into a larger family of priors, and considering a marginal likelihood score statistic for the expansion parameter. Consideration of different expansions can be informative about the nature of any conflict, and extensions to hierarchically specified priors and connections with other approaches to prior-data conflict checking are discussed. Implementation in complex situations is illustrated with two applications. The first concerns testing for the appropriateness of a LASSO penalty in shrinkage estimation of coefficients in linear regression. Our method is compared with a recent suggestion in the literature designed to be powerful against alternatives in the exponential power family, and we use this family as the prior expansion for constructing our check. A second application concerns a problem in quantum state estimation, where a multinomial model is considered with physical constraints on the model parameters. In this example, the usefulness of different prior expansions is demonstrated for obtaining checks which are sensitive to different aspects of the prior.

  • A Jayashankar, AM Babu, HK Ng, and P Mandayam, Finding good codes using the Cartan form, Phys Rev A 101, 042307 (2020); arXiv:1911.02965.

    We present a simple and fast numerical procedure to search for good quantum codes for storing logical qubits in the presence of independent per-qubit noise. In a key departure from past work, we use the worst-case fidelity as the figure of merit for quantifying code performance, a much better indicator of code quality than, say, entanglement fidelity. Yet, our algorithm does not suffer from inefficiencies usually associated with the use of worst-case fidelity. Specifically, using a near-optimal recovery map, we are able to reduce the triple numerical optimization needed for the search to a single optimization over the encoding map. We can further reduce the search space using the Cartan decomposition, focusing our search over the nonlocal degrees of freedom resilient against independent per-qubit noise, while not suffering much in code performance.

  • JY Sim, J Suzuki, B-G Englert, and HK Ng, User-specified random sampling of quantum channels and its applications, Phys Rev A 101, 022307 (2020); arXiv:1905.00696.

    Random samples of quantum channels have many applications in quantum information processing tasks. Due to the Choi–Jamiołkowski isomorphism, there is a well-known correspondence between channels and states, and one can imagine adapting state sampling methods to sample quantum channels. Here, we discuss such an adaptation, using the Hamiltonian Monte Carlo method, a well-known classical method capable of producing high-quality samples from arbitrary, user-specified distributions. Its implementation requires an exact parameterization of the space of quantum channels, with no superfluous parameters and no constraints. We construct such a parameterization and demonstrate its use in three common channel-sampling applications.
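
The channel-state correspondence underlying this adaptation is easy to exhibit numerically. Below we build the Choi matrix of a qubit depolarizing channel, our illustrative example rather than the paper's parameterization, and verify the two CPTP conditions that any valid channel sample must satisfy.

```python
import numpy as np

d, p = 2, 0.3   # qubit depolarizing channel with illustrative strength p

def channel(rho):
    return (1 - p) * rho + p * np.trace(rho) * np.eye(d) / d

# Choi-Jamiolkowski isomorphism: J = sum_{ij} channel(|i><j|) (x) |i><j|
J = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        Eij = np.zeros((d, d), dtype=complex)
        Eij[i, j] = 1
        J += np.kron(channel(Eij), Eij)

# Complete positivity <=> the Choi matrix is positive semidefinite.
eigs = np.linalg.eigvalsh(J)
# Trace preservation <=> tracing out the output factor leaves the identity.
traced = np.einsum('ikil->kl', J.reshape(d, d, d, d))
```

A parameterization of the kind constructed in the paper generates exactly such matrices without imposing the positivity and partial-trace constraints by hand.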

  • A Jayashankar, My DHL, HK Ng, and P Mandayam, Achieving fault tolerance against amplitude-damping noise, arXiv:2107.05485 (2021).

    With the intense interest in small, noisy quantum computing devices comes the push for larger, more accurate — and hence more useful — quantum computers. While fully fault-tolerant quantum computers are, in principle, capable of achieving arbitrarily accurate calculations using devices subjected to general noise, they require immense resources far beyond our current reach. An intermediate step would be to construct quantum computers of limited accuracy enhanced by lower-level, and hence lower-cost, noise-removal techniques. This is the motivation for our work, which looks into fault-tolerant encoded quantum computation targeted at the dominant noise afflicting the quantum device. Specifically, we develop a protocol for fault-tolerant encoded quantum computing components in the presence of amplitude-damping noise, using a 4-qubit code and a recovery procedure tailored to such noise. We describe a universal set of fault-tolerant encoded gadgets and compute the pseudothreshold for the noise, below which our scheme leads to more accurate computation. Our work demonstrates the possibility of applying the ideas of quantum fault tolerance to targeted noise models, generalizing the recent pursuit of biased-noise fault tolerance beyond the usual Pauli noise models. We also illustrate how certain aspects of the standard fault tolerance intuition, largely acquired through Pauli-noise considerations, can fail in the face of more general noise.

Earlier publications (2013-2019)