# Publications

## Recent publications

• Y Lu, JY Sim, J Suzuki, B-G Englert, and HK Ng, Direct estimation of minimum gate fidelity, Phys Rev A 102, 022410 (2020); arXiv:2004.02422.
Abstract:

With the current interest in building quantum computers, there is a strong need for accurate and efficient characterization of the noise in quantum gate implementations. A key measure of the performance of a quantum gate is the minimum gate fidelity, i.e., the fidelity of the gate, minimized over all input states. Conventionally, the minimum fidelity is estimated by first accurately reconstructing the full gate process matrix using the experimental procedure of quantum process tomography (QPT). Then, a numerical minimization is carried out to find the minimum fidelity. QPT is, however, well known to be costly, and it might appear that we can do better, if the goal is only to estimate one single number. In this work, we propose a hybrid numerical-experimental scheme that employs a numerical gradient-free minimization (GFM) and an experimental target-fidelity estimation procedure to directly estimate the minimum fidelity without reconstructing the process matrix. We compare this to an alternative scheme, referred to as QPT fidelity estimation, that does use QPT, but directly employs the minimum gate fidelity as the termination criterion. Both approaches can thus be considered as direct estimation schemes. General theoretical bounds suggest a significant resource savings for the GFM scheme over QPT fidelity estimation; numerical simulations for specific classes of noise, however, show that both schemes have similar performance, reminding us of the need for caution when using general bounds for specific examples. The GFM scheme, however, presents potential for future improvements in resource cost, with the development of even more efficient GFM algorithms.
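The core idea — minimize the gate fidelity over input states with a gradient-free optimizer — can be sketched in a small classical simulation (this is only an illustration, not the hybrid experimental scheme of the paper; the amplitude-damping noise model and all parameter values are assumptions for the example):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative noise: amplitude damping with decay probability gamma,
# acting after an ideal identity gate (so gate fidelity = channel fidelity).
gamma = 0.1
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

def fidelity(x):
    """Fidelity <psi|E(|psi><psi|)|psi> for Bloch angles x = (theta, phi)."""
    theta, phi = x
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])
    rho = np.outer(psi, psi.conj())
    out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
    return np.real(psi.conj() @ out @ psi)

# Gradient-free (Nelder-Mead) minimization over pure input states.
res = minimize(fidelity, x0=[2.0, 0.3], method="Nelder-Mead")
print(res.fun)  # minimum fidelity; for amplitude damping this is 1 - gamma
```

In an actual run of the scheme, each call to `fidelity` would be replaced by an experimental target-fidelity estimate rather than a matrix computation.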

• DJ Nott, M Seah, L Al-Labadi, M Evans, HK Ng, and B-G Englert, Using prior expansions for prior-data conflict checking, arXiv:1902.10393 (2019) (accepted for publication in Bayesian Analysis).
Abstract:

Any Bayesian analysis involves combining information represented through different model components, and when different sources of information are in conflict it is important to detect this. Here we consider checking for prior-data conflict in Bayesian models by expanding the prior used for the analysis into a larger family of priors, and considering a marginal likelihood score statistic for the expansion parameter. Consideration of different expansions can be informative about the nature of any conflict, and extensions to hierarchically specified priors and connections with other approaches to prior-data conflict checking are discussed. Implementation in complex situations is illustrated with two applications. The first concerns testing for the appropriateness of a LASSO penalty in shrinkage estimation of coefficients in linear regression. Our method is compared with a recent suggestion in the literature designed to be powerful against alternatives in the exponential power family, and we use this family as the prior expansion for constructing our check. A second application concerns a problem in quantum state estimation, where a multinomial model is considered with physical constraints on the model parameters. In this example, the usefulness of different prior expansions is demonstrated for obtaining checks which are sensitive to different aspects of the prior.

• A Jayashankar, AM Babu, HK Ng, and P Mandayam, Finding good quantum codes using the Cartan form, Phys Rev A 101, 042307 (2020); arXiv:1911.02965.
Abstract:

We present a simple and fast numerical procedure to search for good quantum codes for storing logical qubits in the presence of independent per-qubit noise. In a key departure from past work, we use the worst-case fidelity as the figure of merit for quantifying code performance, a much better indicator of code quality than, say, entanglement fidelity. Yet, our algorithm does not suffer from inefficiencies usually associated with the use of worst-case fidelity. Specifically, using a near-optimal recovery map, we are able to reduce the triple numerical optimization needed for the search to a single optimization over the encoding map. We can further reduce the search space using the Cartan decomposition, focusing our search over the nonlocal degrees of freedom resilient against independent per-qubit noise, while not suffering much in code performance.

• JY Sim, J Suzuki, B-G Englert, and HK Ng, User-specified random sampling of quantum channels and its applications, Phys Rev A 101, 022307 (2020); arXiv:1905.00696.
Abstract:

Random samples of quantum channels have many applications in quantum information processing tasks. Due to the Choi–Jamiołkowski isomorphism, there is a well-known correspondence between channels and states, and one can imagine adapting *state* sampling methods to sample quantum channels. Here, we discuss such an adaptation, using the Hamiltonian Monte Carlo method, a well-known classical method capable of producing high quality samples from arbitrary, user-specified distributions. Its implementation requires an exact parameterization of the space of quantum channels, with no superfluous parameters and no constraints. We construct such a parameterization, and demonstrate its use in three common channel sampling applications.
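The constraints such a parameterization must encode are easiest to state in the Choi picture; a minimal sketch (using a standard textbook construction, not the paper's parameterization; the dephasing example is illustrative) builds the Choi matrix of a channel from Kraus operators and checks the CPTP conditions:

```python
import numpy as np

def choi_from_kraus(kraus, d):
    """Choi matrix J(E) = sum_ij |i><j| (x) E(|i><j|) of a channel."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0
            out = sum(K @ Eij @ K.conj().T for K in kraus)
            J += np.kron(Eij, out)
    return J

# Example: qubit dephasing channel with flip probability p (illustrative).
p = 0.25
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * Z]
J = choi_from_kraus(kraus, 2)

# Complete positivity: the Choi matrix is positive semidefinite.
evals = np.linalg.eigvalsh(J)
# Trace preservation: partial trace over the output system is the identity.
ptrace = np.einsum("ikjk->ij", J.reshape(2, 2, 2, 2))
```

A sampler must stay on the set of matrices passing both checks; eliminating these constraints exactly is what the parameterization in the paper provides.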

• JY Sim, J Shang, HK Ng, and B-G Englert, Proper error bars for self-calibrating quantum tomography, Phys Rev A 100, 022333 (2019); arXiv:1904.11202.
Abstract:

Self-calibrating quantum state tomography aims at reconstructing the unknown quantum state and certain properties of the measurement devices from the same data. Since the estimates of the state and device parameters come from the same data, one should employ a joint estimation scheme, including the construction and reporting of joint state-device error regions to quantify uncertainty. We explain how to do this naturally within the framework of optimal error regions. As an illustrative example, we apply our procedure to the double-crosshair measurement of the BB84 scenario in quantum cryptography and so reconstruct the state and estimate the detection efficiencies simultaneously and reliably. We also discuss the practical situation of a satellite-based quantum key distribution scheme, for which self-calibration and proper treatment of the data are necessities.

• Y Gazit, HK Ng, and J Suzuki, Quantum process tomography via optimal design of experiments, Phys Rev A 100, 012350 (2019); arXiv:1904.11849.
Abstract:

Quantum process tomography — a primitive in many quantum information processing tasks — can be cast within the framework of the theory of design of experiment (DoE), a branch of classical statistics that deals with the relationship between inputs and outputs of an experimental setup. Such a link potentially gives access to the many ideas of the rich subject of classical DoE for use in quantum problems. The classical techniques from DoE cannot, however, be directly applied to quantum process tomography due to the basic structural differences between the classical and quantum estimation problems. Here, we properly formulate quantum process tomography as a DoE problem, and examine several examples to illustrate the link and the methods. In particular, we discuss the common issue of nuisance parameters, and point out interesting features in the quantum problem absent in the usual classical setting.
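The DoE flavor of the problem can be seen through the classical Fisher information of a candidate design; a toy sketch (a phase-estimation example chosen for illustration, not taken from the paper) compares two probe states for estimating the angle of an R_z(theta) gate:

```python
import numpy as np

def fisher_info(p, theta, h=1e-6):
    """Classical Fisher information of a two-outcome measurement with
    outcome probability p(theta), via a central finite difference."""
    dp = (p(theta + h) - p(theta - h)) / (2 * h)
    q = p(theta)
    return dp**2 / (q * (1 - q))

theta = 0.7
# Design 1: probe |+>, X-basis measurement after R_z(theta): p = cos^2(theta/2).
# (This design has constant Fisher information 1 for every theta.)
I_plus = fisher_info(lambda t: np.cos(t / 2) ** 2, theta)
# Design 2: probe |0> only acquires a global phase under R_z, so the
# X-basis outcome is 50/50 regardless of theta: zero information.
I_zero = fisher_info(lambda t: 0.5, theta)
```

Choosing the inputs and measurements that maximize (a functional of) this information matrix is exactly the kind of question classical DoE addresses.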

• J Qi and HK Ng, Comparing the randomized benchmarking figure with the average infidelity of a quantum gate-set, Int J Quant Inf 17, 1950031 (2019); arXiv:1805.10622.
Abstract:

Randomized benchmarking (RB) is a popular procedure used to gauge the performance of a set of gates useful for quantum information processing (QIP). Recently, Proctor et al. [Phys. Rev. Lett. 119, 130502 (2017)] demonstrated a practically relevant example where the RB measurements give a number $r$ very different from the actual average gate-set infidelity $\epsilon$, despite past theoretical assurances that the two should be equal. Here, we derive formulas for $\epsilon$, and for $r$ from the RB protocol, in a manner permitting easy comparison of the two. We show that $r\neq \epsilon$, i.e., RB does not measure average infidelity, and, in fact, neither one bounds the other. We give several examples, all plausible in real experiments, to illustrate the differences in $\epsilon$ and $r$. Many recent papers on experimental implementations of QIP have claimed the ability to perform high-fidelity gates because they demonstrated small $r$ values using RB. Our analysis shows that such a conclusion cannot be drawn from RB alone.
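For reference, the number $r$ comes from fitting the standard RB decay model $F(m) = A p^m + B$ to average sequence fidelities; a minimal fitting sketch on synthetic data (all parameter values illustrative, data noiseless for clarity):

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_model(m, A, p, B):
    """Standard RB decay model for sequence length m."""
    return A * p**m + B

# Synthetic average sequence fidelities (illustrative).
m = np.arange(1, 201)
A_true, p_true, B_true = 0.5, 0.98, 0.5
F = rb_model(m, A_true, p_true, B_true)

(A, p, B), _ = curve_fit(rb_model, m, F, p0=[0.4, 0.9, 0.4])
d = 2  # single-qubit gates
r = (1 - p) * (d - 1) / d  # the RB number reported in experiments
```

The paper's point is precisely that this $r$, however cleanly extracted, need not equal (or even bound) the average gate-set infidelity $\epsilon$.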

## Earlier publications (2013–2018)

## Preprints

• Y Gu, R Mishra, B-G Englert, and HK Ng, Randomized linear gate set tomography, arXiv:2010.12235 (2020).
Abstract:

Characterizing the noise in the set of gate operations that form the building blocks of a quantum computational device is a necessity for assessing the quality of the device. Here, we introduce randomized linear gate set tomography, an easy-to-implement gate set tomography procedure that combines the idea of state-preparation-and-measurement-error-free characterization of standard gate set tomography with no-design randomized tomographic circuits and computational ease brought about by an appropriate linear approximation. We demonstrate the performance of our scheme through simulated examples as well as experiments done on the IBM Quantum Experience Platform. In each case, we see that the performance of our procedure is comparable with that of standard gate set tomography, while requiring no complicated tomographic circuit design and taking much less computational time in deducing the estimate of the noise parameters. This allows for straightforward on-the-fly characterization of the gate operations in an experiment.

• J Qi and HK Ng, Randomized benchmarking in the presence of time-correlated dephasing noise, arXiv:2010.11498 (2020).
Abstract:

Randomized benchmarking has emerged as a popular and easy-to-implement experimental technique for gauging the quality of gate operations in quantum computing devices. A typical randomized benchmarking procedure identifies the exponential decay in the fidelity as the benchmarking sequence of gates increases in length, and the decay rate is used to estimate the fidelity of the gate. That the fidelity decays exponentially, however, relies on the assumption of time-independent or static noise in the gates, with no correlations or significant drift in the noise over the gate sequence, a well-satisfied condition in many situations. Deviations from the standard exponential decay, however, have been observed, usually attributed to some amount of time correlations in the noise, though the precise mechanisms for deviation have yet to be fully explored. In this work, we examine this question of randomized benchmarking for time-correlated noise—specifically for time-correlated dephasing noise for exact solvability—and elucidate the circumstances in which a deviation from exponential decay can be expected.

• M Fellous-Asiani, JH Chai, RS Whitney, A Auffèves, and HK Ng, Limitations in quantum computing from resource constraints, arXiv:2007.01966 (2020).
Abstract:

Fault-tolerant quantum computation is the only known route to large-scale, accurate quantum computers. Fault tolerance schemes prescribe how, by investing more physical resources and scaling up the size of the computer, we can keep the computational errors in check and carry out more and more accurate calculations. Underlying all such schemes is the assumption that the error per physical gate is independent of the size of the quantum computer. This, unfortunately, is not reflective of current quantum computing experiments. Here, we examine the general consequences on fault-tolerant quantum computation when constraints on physical resources, such as limited energy input, result in physical error rates that grow as the computer grows. In this case, fault tolerance schemes can no longer reduce computational error to an arbitrarily small number, even if one starts below the so-called fault tolerance noise threshold. Instead, there is a minimum attainable computational error, beyond which further growth of the computer in an attempt to reduce the error becomes counter-productive. We discuss simple, but rather generic, situations in which this effect can arise, and highlight the areas of future developments needed for experiments to overcome this limitation.
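The effect can be seen in a toy concatenated-code model (a simplified illustration, not the paper's analysis; threshold, base error, and growth rate are all made-up numbers): if the physical error per gate grows with the concatenation level $k$, the logical error $p_L = p_{th}(p(k)/p_{th})^{2^k}$ first falls and then rises, so there is a minimum attainable error at a finite machine size.

```python
import numpy as np

p_th = 0.01   # fault-tolerance threshold (illustrative)
p0 = 0.002    # physical error of the smallest device (illustrative)
beta = 0.3    # growth of physical error with machine size (illustrative)

def logical_error(k):
    """Level-k concatenated logical error with size-dependent physical error."""
    p_phys = p0 * np.exp(beta * k)  # error grows as the computer grows
    return p_th * (p_phys / p_th) ** (2**k)

levels = np.arange(0, 8)
p_L = np.array([logical_error(k) for k in levels])
k_opt = int(np.argmin(p_L))  # beyond this level, growing the machine hurts
```

Although the starting error p0 is well below the threshold, further concatenation past `k_opt` becomes counter-productive once the growing physical error catches up.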

• Y Quek, S Fort, and HK Ng, Adaptive Quantum State Tomography with Neural Networks, arXiv:1812.06693 (2018).
Abstract:

Quantum State Tomography is the task of determining an unknown quantum state by making measurements on identical copies of the state. Current algorithms are costly both on the experimental front — requiring vast numbers of measurements — as well as in terms of the computational time to analyze those measurements. In this paper, we address the problem of analysis speed and flexibility, introducing Neural Adaptive Quantum State Tomography (NA-QST), a machine learning based algorithm for quantum state tomography that adapts measurements and provides orders of magnitude faster processing while retaining state-of-the-art reconstruction accuracy. Our algorithm is inspired by particle swarm optimization and Bayesian particle-filter based adaptive methods, which we extend and enhance using neural networks. The resampling step, in which a bank of candidate solutions — particles — is refined, is in our case learned directly from data, removing the computational bottleneck of standard methods. We successfully replace the Bayesian calculation that requires computational time of O(poly(n)) with a learned heuristic whose time complexity empirically scales as O(log(n)) with the number of copies measured n, while retaining the same reconstruction accuracy. This corresponds to a factor of a million speedup for 10^7 copies measured. We demonstrate that our algorithm learns to work with basis, symmetric informationally complete (SIC), as well as other types of POVMs. We discuss the value of measurement adaptivity for each POVM type, demonstrating that its effect is significant only for basis POVMs. Our algorithm can be retrained within hours on a single laptop for a two-qubit situation, which suggests a feasible time-cost when extended to larger systems. It can also adapt to a subset of possible states, a choice of the type of measurement, and other experimental details.
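The resampling bottleneck that NA-QST replaces with a learned network comes from the standard Bayesian particle filter; a minimal single-qubit sketch of that baseline (simulated Z-basis data only; the true state, particle count, and shot number are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
z_true = 0.9                       # true Bloch-z of the unknown state
particles = rng.uniform(-1, 1, N)  # candidate states, here just their Bloch-z

# Simulate Z-basis measurement outcomes: P(up) = (1 + z)/2.
shots = 200
ups = rng.binomial(1, (1 + z_true) / 2, shots).sum()

# Bayesian weight update (in log space for numerical stability).
logw = ups * np.log((1 + particles) / 2) \
     + (shots - ups) * np.log((1 - particles) / 2)
logw -= logw.max()
w = np.exp(logw)
w /= w.sum()
post_mean = float(w @ particles)  # posterior estimate of z

# Systematic resampling: the step whose cost NA-QST removes.
cw = np.cumsum(w)
cw[-1] = 1.0  # guard against floating-point shortfall in the last bin
positions = (rng.uniform() + np.arange(N)) / N
resampled = particles[np.searchsorted(cw, positions)]
```

In a full adaptive scheme this update-resample loop runs after every batch of measurements, which is where the O(poly(n)) cost quoted in the abstract accumulates.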

• B-G Englert, M Evans, GH Jang, HK Ng, DJ Nott, and Y-L Seah, Checking the model and the prior for the constrained multinomial, arXiv:1804.06906 (2018).
Abstract:

The multinomial model is one of the simplest statistical models. When constraints are placed on the possible values for the probabilities, however, it becomes much more difficult to deal with. Model checking and checking for prior-data conflict is considered here for such models. A theorem is proved that establishes the consistency of the check on the prior. Applications are presented to models that arise in quantum state estimation as well as the Bayesian analysis of models for ordered probabilities.
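A minimal illustration of a prior-data conflict check in the simplest unconstrained case — a binomial model with a Beta prior, using the tail probability of the prior predictive as the check (this is one standard flavor of such checks; the numbers are illustrative and none of the constrained-multinomial machinery of the paper appears here):

```python
import numpy as np
from scipy.stats import betabinom

def conflict_pvalue(a, b, n, x_obs):
    """Tail probability of the prior predictive: how surprising is x_obs
    under the Beta-binomial prior predictive distribution?"""
    x = np.arange(n + 1)
    m = betabinom.pmf(x, n, a, b)
    return m[m <= m[x_obs]].sum()

# Prior concentrated near p ~ 1, but the data say p is small: conflict.
p_conflict = conflict_pvalue(a=50, b=1, n=20, x_obs=0)
# Same prior, data consistent with it: no conflict.
p_ok = conflict_pvalue(a=50, b=1, n=20, x_obs=19)
```

A small tail probability flags the prior as conflicting with the data; the constrained case treated in the paper requires more care because the constraints distort the prior predictive.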