Past publications (2013-2021)

  • Y Lu, JY Sim, J Suzuki, B-G Englert, and HK Ng, Direct estimation of minimum gate fidelity, Phys Rev A 102, 022410 (2020); arXiv:2004.02422.

    With the current interest in building quantum computers, there is a strong need for accurate and efficient characterization of the noise in quantum gate implementations. A key measure of the performance of a quantum gate is the minimum gate fidelity, i.e., the fidelity of the gate, minimized over all input states. Conventionally, the minimum fidelity is estimated by first accurately reconstructing the full gate process matrix using the experimental procedure of quantum process tomography (QPT). Then, a numerical minimization is carried out to find the minimum fidelity. QPT is, however, well known to be costly, and it might appear that we can do better, if the goal is only to estimate one single number. In this work, we propose a hybrid numerical-experimental scheme that employs a numerical gradient-free minimization (GFM) and an experimental target-fidelity estimation procedure to directly estimate the minimum fidelity without reconstructing the process matrix. We compare this to an alternative scheme, referred to as QPT fidelity estimation, that does use QPT, but directly employs the minimum gate fidelity as the termination criterion. Both approaches can thus be considered direct estimation schemes. General theoretical bounds suggest significant resource savings for the GFM scheme over QPT fidelity estimation; numerical simulations for specific classes of noise, however, show that both schemes have similar performance, reminding us of the need for caution when using general bounds for specific examples. Still, the GFM scheme presents potential for future improvements in resource cost, with the development of even more efficient GFM algorithms.
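
A minimal numerical sketch of the GFM half of such a scheme, for a toy single-qubit example: the target gate is the identity, the noisy implementation is a small z-rotation error, and the "experimental" fidelity estimate is replaced by an exact calculation. The noise model, the state parameterization, and the use of Nelder-Mead are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

# Toy setting: target gate = identity, noisy implementation = Rz(delta).
# The gate fidelity for a pure input |psi> is then |<psi|Rz(delta)|psi>|^2,
# to be minimized over all input states (parameterized by Bloch angles).
delta = 0.2
rz = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])

def fidelity(params):
    theta, phi = params
    psi = np.array([np.cos(theta / 2),
                    np.exp(1j * phi) * np.sin(theta / 2)])
    return abs(np.vdot(psi, rz @ psi)) ** 2

# gradient-free minimization (Nelder-Mead) over the input state
res = minimize(fidelity, x0=[1.0, 0.5], method="Nelder-Mead")
min_fid = res.fun
# analytically, the minimum is cos^2(delta/2), attained on the equator
```

In the actual scheme, each evaluation of `fidelity` would be an experimental target-fidelity estimate rather than an exact calculation.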

  • DJ Nott, M Seah, L Al-Labadi, M Evans, HK Ng, and B-G Englert, Using prior expansions for prior-data conflict checking, Bayesian Anal 16, 203 (2021); arXiv:1902.10393.

    Any Bayesian analysis involves combining information represented through different model components, and when different sources of information are in conflict it is important to detect this. Here we consider checking for prior-data conflict in Bayesian models by expanding the prior used for the analysis into a larger family of priors, and considering a marginal likelihood score statistic for the expansion parameter. Consideration of different expansions can be informative about the nature of any conflict, and extensions to hierarchically specified priors and connections with other approaches to prior-data conflict checking are discussed. Implementation in complex situations is illustrated with two applications. The first concerns testing for the appropriateness of a LASSO penalty in shrinkage estimation of coefficients in linear regression. Our method is compared with a recent suggestion in the literature designed to be powerful against alternatives in the exponential power family, and we use this family as the prior expansion for constructing our check. A second application concerns a problem in quantum state estimation, where a multinomial model is considered with physical constraints on the model parameters. In this example, the usefulness of different prior expansions is demonstrated for obtaining checks which are sensitive to different aspects of the prior.
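
A toy illustration of the score-statistic idea (not the paper's LASSO or quantum applications): take x ~ N(theta, 1) with analysis prior theta ~ N(0, 1), expand the prior to N(0, lambda), and check the observed score of the marginal likelihood at lambda = 1 against its prior-predictive distribution. The model and numbers are assumptions chosen so that everything is analytic:

```python
import numpy as np
from scipy.stats import norm

# Model: x | theta ~ N(theta, 1); analysis prior: theta ~ N(0, 1).
# Prior expansion: theta ~ N(0, lam), so the marginal is x ~ N(0, 1 + lam).
# Score statistic at lam = 1:
#   S(x) = d/dlam log m_lam(x) |_{lam=1} = -1/4 + x**2 / 8,
# an increasing function of x**2, so the conflict check reduces to a
# tail probability of x**2 under the prior predictive N(0, 2).
def conflict_pvalue(x_obs):
    return 2 * norm.sf(abs(x_obs) / np.sqrt(2))

p_far = conflict_pvalue(5.0)    # data far out in the prior predictive: conflict
p_near = conflict_pvalue(0.5)   # data consistent with the prior: no conflict
```

A small p-value flags prior-data conflict; different expansion families would make the check sensitive to different aspects of the prior.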

  • A Jayashankar, AM Babu, HK Ng, and P Mandayam, Finding good quantum codes using the Cartan form, Phys Rev A 101, 042307 (2020); arXiv:1911.02965.

    We present a simple and fast numerical procedure to search for good quantum codes for storing logical qubits in the presence of independent per-qubit noise. In a key departure from past work, we use the worst-case fidelity as the figure of merit for quantifying code performance, a much better indicator of code quality than, say, entanglement fidelity. Yet, our algorithm does not suffer from inefficiencies usually associated with the use of worst-case fidelity. Specifically, using a near-optimal recovery map, we are able to reduce the triple numerical optimization needed for the search to a single optimization over the encoding map. We can further reduce the search space using the Cartan decomposition, focusing our search over the nonlocal degrees of freedom resilient against independent per-qubit noise, while not suffering much in code performance.

  • JY Sim, J Suzuki, B-G Englert, and HK Ng, User-specified random sampling of quantum channels and its applications, Phys Rev A 101, 022307 (2020); arXiv:1905.00696.

    Random samples of quantum channels have many applications in quantum information processing tasks. Due to the Choi–Jamiołkowski isomorphism, there is a well-known correspondence between channels and states, and one can imagine adapting state sampling methods to sample quantum channels. Here, we discuss such an adaptation, using the Hamiltonian Monte Carlo method, a well-known classical method capable of producing high quality samples from arbitrary, user-specified distributions. Its implementation requires an exact parameterization of the space of quantum channels, with no superfluous parameters and no constraints. We construct such a parameterization, and demonstrate its use in three common channel sampling applications.

  • JY Sim, J Shang, HK Ng, and B-G Englert, Proper error bars for self-calibrating quantum tomography, Phys Rev A 100, 022333 (2019); arXiv:1904.11202.

    Self-calibrating quantum state tomography aims at reconstructing the unknown quantum state and certain properties of the measurement devices from the same data. Since the estimates of the state and device parameters come from the same data, one should employ a joint estimation scheme, including the construction and reporting of joint state-device error regions to quantify uncertainty. We explain how to do this naturally within the framework of optimal error regions. As an illustrative example, we apply our procedure to the double-crosshair measurement of the BB84 scenario in quantum cryptography and so reconstruct the state and estimate the detection efficiencies simultaneously and reliably. We also discuss the practical situation of a satellite-based quantum key distribution scheme, for which self-calibration and proper treatment of the data are necessities.

  • Y Gazit, HK Ng, and J Suzuki, Quantum process tomography via optimal design of experiments, Phys Rev A 100, 012350 (2019); arXiv:1904.11849.

    Quantum process tomography — a primitive in many quantum information processing tasks — can be cast within the framework of the theory of the design of experiments (DoE), a branch of classical statistics that deals with the relationship between inputs and outputs of an experimental setup. Such a link potentially gives access to the many ideas of the rich subject of classical DoE for use in quantum problems. The classical techniques from DoE cannot, however, be directly applied to quantum process tomography due to the basic structural differences between the classical and quantum estimation problems. Here, we properly formulate quantum process tomography as a DoE problem, and examine several examples to illustrate the link and the methods. In particular, we discuss the common issue of nuisance parameters, and point out interesting features in the quantum problem absent in the usual classical setting.

  • J Qi and HK Ng, Comparing the randomized benchmarking figure with the average infidelity of a quantum gate-set, Int J Quant Inf 17, 1950031 (2019); arXiv:1805.10622.

    Randomized benchmarking (RB) is a popular procedure used to gauge the performance of a set of gates useful for quantum information processing (QIP). Recently, Proctor et al. [Phys. Rev. Lett. 119, 130502 (2017)] demonstrated a practically relevant example where the RB measurements give a number r very different from the actual average gate-set infidelity ε, despite past theoretical assurances that the two should be equal. Here, we derive formulas for ε, and for r from the RB protocol, in a manner permitting easy comparison of the two. We show that r ≠ ε, i.e., RB does not measure average infidelity, and, in fact, neither one bounds the other. We give several examples, all plausible in real experiments, to illustrate the differences in ε and r. Many recent papers on experimental implementations of QIP have claimed the ability to perform high-fidelity gates because they demonstrated small r values using RB. Our analysis shows that such a conclusion cannot be drawn from RB alone.
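
A sketch of the RB protocol itself (not of the paper's counterexamples): the snippet builds the single-qubit Clifford group, runs an RB sequence under gate-independent depolarizing noise, and reads off the decay parameter. For this special noise model the RB number does equal the average infidelity q/2; the paper's examples where r ≠ ε involve gate-dependent noise, which is beyond this toy. The noise model and parameters are illustrative assumptions:

```python
import numpy as np

# Build the 24-element single-qubit Clifford group (up to phase) by
# closing over the generators H and S.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])

def key(U):
    # phase-normalize so that equal-up-to-phase matrices dedupe
    z = U.ravel()[np.argmax(np.abs(U.ravel()) > 1e-9)]
    return tuple(np.round((U * abs(z) / z).ravel(), 6))

cliffords = {key(np.eye(2)): np.eye(2, dtype=complex)}
frontier = [np.eye(2, dtype=complex)]
while frontier:
    nxt = []
    for U in frontier:
        for G in (H, S):
            V, k = G @ U, key(G @ U)
            if k not in cliffords:
                cliffords[k] = V
                nxt.append(V)
    frontier = nxt
cliffords = list(cliffords.values())

rng = np.random.default_rng(7)
q = 0.02                       # depolarizing strength per gate (assumed)

def survival(m):
    # density-matrix simulation of one random RB sequence of length m
    rho = np.array([[1, 0], [0, 0]], complex)
    acc = np.eye(2, dtype=complex)
    for _ in range(m):
        C = cliffords[rng.integers(len(cliffords))]
        rho = C @ rho @ C.conj().T
        rho = (1 - q) * rho + q * np.eye(2) / 2   # noise after each gate
        acc = C @ acc
    inv = acc.conj().T         # perfect inversion gate, for simplicity
    rho = inv @ rho @ inv.conj().T
    return rho[0, 0].real

# For gate-independent depolarizing noise the decay is exactly
# 1/2 + (1/2) p**m with p = 1 - q, so one sequence suffices here,
# and the RB number is r = (1 - p)(d - 1)/d = q/2.
m = 20
p_est = (2 * survival(m) - 1) ** (1 / m)
r_est = (1 - p_est) / 2
```

Real experiments average over many random sequences and fit A p^m + B; the shortcut above works only because depolarizing noise commutes with every gate.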

  • YL Len and HK Ng, Open-system quantum error correction, Phys Rev A 98, 022307 (2018); arXiv:1804.09486.

    We study the performance of quantum error correction (QEC) on a system undergoing open-system (OS) dynamics. The noise on the system originates from a joint quantum channel on the system-bath composite, a framework that includes and interpolates between the commonly used system-only quantum noise channel model and the system-bath Hamiltonian noise model. We derive the perfect OSQEC conditions, with QEC recovery only on the system and not the inaccessible bath. When the noise is only approximately correctable, the generic case of interest, we quantify the performance of OSQEC using worst-case fidelity. We find that the leading deviation from unit fidelity after recovery is quadratic in the uncorrectable part, a result reminiscent of past work on approximate QEC for system-only noise, although the approach here requires the use of different techniques than in past work.

  • Y Zheng, C-Y Lai, and TA Brun, Efficient preparation of large block code ancilla states for fault-tolerant quantum computation, Phys Rev A 97, 032331 (2018); arXiv:1710.00389.

    Fault-tolerant quantum computation (FTQC) schemes that use multi-qubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement of a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data code blocks, which are generally difficult to prepare if the code size is large. Previously we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes using classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault-tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^{−2}) to O(1) in practice for an [[n,k,d=2t+1]] CSS code. Ancilla preparation for the [[23,1,7]] quantum Golay code is numerically studied in detail through Monte Carlo simulation. The results support the validity of the protocol when the gate failure rate is reasonably low. To the best of our knowledge, this approach is the first attempt to prepare general large block stabilizer states free of correlated errors for FTQC in a fault-tolerant and efficient manner.

  • YL Len, J Dai, B-G Englert, and LA Krivitsky, Unambiguous path discrimination in a two-path interferometer, Phys Rev A 98, 022110 (2018); arXiv:1708.01408.

    When a photon is detected after passing through an interferometer one might wonder which path it took, and a meaningful answer can only be given if one has the means of monitoring the photon’s whereabouts. We report the realization of a single-photon experiment for a two-path interferometer with path marking. In this experiment, the path of a photon (“signal”) through a Mach–Zehnder interferometer becomes known by unambiguous discrimination between the two paths. We encode the signal path in the polarization state of a partner photon (“idler”) whose polarization is examined by a three-outcome measurement: one outcome each for the two signal paths plus an inconclusive outcome. Our results agree fully with the theoretical predictions from a common-sense analysis of what can be said about the past of a quantum particle: The signals for which we get the inconclusive result have full interference strength, as their paths through the interferometer cannot be known; and every photon that emerges from the dark output port of the balanced interferometer has a known path.

  • Y Zheng and HK Ng, A digital quantum simulator in the presence of a bath, Phys Rev A 96, 042329 (2017); arXiv:1707.04407.

    For a digital quantum simulator (DQS) imitating a target system, we ask the following question: Under what conditions is the simulator dynamics similar to that of the target in the presence of coupling to a bath? In this paper, we derive conditions for close simulation for three different physical regimes, replacing previous heuristic arguments on the subject with rigorous statements. In fact, we find that the conventional wisdom that the simulation cycle time should always be short for good simulation need not hold. Numerical simulations of two specific examples strengthen the evidence for our analysis, and go beyond it to explore broader regimes.
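
A minimal closed-system sketch of the cycle-time question (the paper's central ingredient, the coupling to a bath, is not modeled here): a first-order Trotter DQS for the toy target Hamiltonian H = X + Z, with the error against the exact evolution measured as a function of the cycle time. The Hamiltonian and norms are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], complex)
Z = np.diag([1.0, -1.0]).astype(complex)
t = 1.0                                      # total simulation time

def trotter_error(n_cycles):
    tau = t / n_cycles                       # simulation cycle time
    step = expm(-1j * tau * X) @ expm(-1j * tau * Z)
    u_sim = np.linalg.matrix_power(step, n_cycles)
    u_exact = expm(-1j * t * (X + Z))
    return np.linalg.norm(u_sim - u_exact, 2)

# first-order Trotter error shrinks roughly like tau = t/n
errors = [trotter_error(n) for n in (4, 8, 16)]
```

Halving the cycle time roughly halves the error in this closed-system case; the paper's point is that with a bath present, shorter cycles are not automatically better.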

  • M-I Trappe, YL Len, HK Ng, and B-G Englert, Airy-averaged gradient corrections for two-dimensional fermion gases, Ann Phys 385, 136 (2017); arXiv:1612.04048.

    Building on the discussion in PRA 93, 042510 (2016), we present a systematic derivation of gradient corrections to the kinetic-energy functional and the one-particle density, in particular for two-dimensional systems. We derive the leading gradient corrections from a semiclassical expansion based on Wigner’s phase space formalism and demonstrate that the semiclassical kinetic-energy density functional at zero temperature cannot be evaluated unambiguously. In contrast, a density-potential functional description that effectively incorporates interactions provides unambiguous gradient corrections. Employing an averaging procedure that involves Airy functions, thereby partially resumming higher-order gradient corrections, we facilitate a smooth transition of the particle density into the classically forbidden region of arbitrary smooth potentials. We find excellent agreement of the semiclassical Airy-averaged particle densities with the exact densities for very low but finite temperatures, illustrated for a Fermi gas with harmonic potential energy. We furthermore provide criteria for the applicability of the semiclassical expansions at low temperatures. Finally, we derive a well-behaved ground-state kinetic-energy functional, which improves on the Thomas-Fermi approximation.

  • B-G Englert, K Horia, J Dai, YL Len, and HK Ng, Past of a quantum particle revisited, Phys Rev A 96, 022126 (2017); arXiv:1704.03722.

    We analyze Vaidman’s three-path interferometer with weak path marking [Phys. Rev. A 87, 052104 (2013)] and find that common sense yields correct statements about the particle’s path through the interferometer. This disagrees with the original claim that the particles have discontinuous trajectories at odds with common sense. In our analysis, “the particle’s path” has operational meaning as acquired by a path-discriminating measurement. For a quantum-mechanical experimental demonstration of the case, one should perform a single-photon version of the experiment by Danan et al. [Phys. Rev. Lett. 111, 240402 (2013)] with unambiguous path discrimination. We present a detailed proposal for such an experiment.

  • J Shang, Z Zhang, and HK Ng, Superfast maximum likelihood reconstruction for quantum tomography, Phys Rev A 95, 062336 (2017); arXiv:1609.07881.

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes a state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
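
The projection at the heart of such methods maps a Hermitian matrix to the nearest density matrix by projecting its eigenvalues onto the probability simplex. The sketch below runs plain projected-gradient ascent (no acceleration, unlike the paper) for one qubit with a six-outcome Pauli POVM and idealized noise-free counts; the POVM, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

def project_to_density(R):
    # Euclidean projection of a Hermitian matrix onto density matrices:
    # project the eigenvalue vector onto the probability simplex.
    w, V = np.linalg.eigh(R)
    u = np.sort(w)[::-1]
    css = np.cumsum(u) - 1
    k = np.nonzero(u - css / np.arange(1, len(u) + 1) > 0)[0][-1]
    w = np.maximum(w - css[k] / (k + 1), 0)
    return (V * w) @ V.conj().T

b = 1 / np.sqrt(2)
kets = [np.array(v, complex) for v in
        [[1, 0], [0, 1], [b, b], [b, -b], [b, 1j * b], [b, -1j * b]]]
povm = [np.outer(kk, kk.conj()) / 3 for kk in kets]   # sums to identity

true_rho = np.array([[0.8, 0.2], [0.2, 0.2]], complex)
# idealized (noise-free) counts, so the MLE is exactly true_rho
counts = 1000 * np.array([np.trace(E @ true_rho).real for E in povm])

rho = np.eye(2, dtype=complex) / 2
for _ in range(2000):
    p = np.array([np.trace(E @ rho).real for E in povm])
    grad = sum((n / pk) * E for n, pk, E in zip(counts, p, povm))
    rho = project_to_density(rho + 5e-4 * grad)       # gradient of log-likelihood
```

The accelerated variant adds a momentum term to this iteration; the projection step is identical.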

  • X Li, J Shang, HK Ng, and B-G Englert, Optimal error intervals for properties of the quantum state, Phys Rev A 94, 062112 (2016); arXiv:1602.05780.

    Quantum state estimation aims at determining the quantum state from observed data. Estimating the full state can require considerable efforts, but one is often only interested in a few properties of the state, such as the fidelity with a target state, or the degree of correlation for a specified bipartite structure. Rather than first estimating the state, one can, and should, estimate those quantities of interest directly from the data. We propose the use of optimal error intervals as a meaningful way of stating the accuracy of the estimated property values. Optimal error intervals are analogs of the optimal error regions for state estimation [New J. Phys. 15, 123026 (2013)]. They are optimal in two ways: They have the largest likelihood for the observed data and the prechosen size, and they are the smallest for the prechosen probability of containing the true value. As in the state situation, such optimal error intervals admit a simple description in terms of the marginal likelihood for the data for the properties of interest. Here, we present the concept and construction of optimal error intervals, report on an iterative algorithm for reliable computation of the marginal likelihood (a quantity difficult to calculate reliably), explain how plausible intervals—a notion of evidence provided by the data—are related to our optimal error intervals, and illustrate our methods with single-qubit and two-qubit examples.

  • J Dai, YL Len, and HK Ng, Initial system-bath state via the maximum-entropy principle, Phys Rev A 94, 052112 (2016); arXiv:1508.06736.

    The initial state of a system-bath composite is needed as the input for prediction from any quantum evolution equation to describe subsequent system-only reduced dynamics or the noise on the system from joint evolution of the system and the bath. The conventional wisdom is to write down an uncorrelated state as if the system and the bath were prepared in the absence of each other; yet, such a factorized state cannot be the exact description in the presence of system-bath interactions. Here, we show how to go beyond the simplistic factorized-state prescription using ideas from quantum tomography: We employ the maximum-entropy principle to deduce an initial system-bath state consistent with the available information. For the generic case of weak interactions, we obtain an explicit formula for the correction to the factorized state. Such a state turns out to have little correlation between the system and the bath, which we can quantify using our formula. This has implications, in particular, on the subject of subsequent non-completely positive dynamics of the system. Deviation from predictions based on such an almost uncorrelated state is indicative of accidental control of hidden degrees of freedom in the bath.

  • M-I Trappe, YL Len, HK Ng, C Müller, and B-G Englert, Leading gradient correction to the kinetic energy for two-dimensional fermion gases, Phys Rev A 93, 042510 (2016); arXiv:1512.07367.

    Density-functional theory (DFT) is notorious for the absence of gradient corrections to the two-dimensional (2D) Thomas-Fermi kinetic-energy functional; it is widely accepted that the 2D analog of the 3D von Weizsäcker correction vanishes, together with all higher-order corrections. Contrary to this long-held belief, we show that the leading correction to the kinetic energy does not vanish, is unambiguous, and contributes perturbatively to the total energy. This insight emerges naturally in a simple extension of standard DFT, which has the effective potential energy as a functional variable on equal footing with the single-particle density.

  • R Han, HK Ng, and B-G Englert, Implementing a neutral-atom controlled-phase gate with a single Rydberg pulse, Europhys Lett 113, 40001 (2016); arXiv:1407.8051.

    One can implement fast two-qubit entangling gates by exploiting the Rydberg blockade. Although various theoretical schemes have been proposed, experimenters have not yet been able to demonstrate two-atom gates of high fidelity due to experimental constraints. We propose a novel scheme for constructing neutral-atom controlled-phase gates, using only a single Rydberg pulse that illuminates both atoms. In contrast to the existing schemes, our approach is simpler to implement and requires neither individual addressing of atoms nor adiabatic procedures. With parameters estimated based on actual experimental scenarios, a gate fidelity higher than 0.99 is achievable.

  • J Řeháček, Z Hradil, YS Teo, L Sánchez-Soto, HK Ng, JH Chai, and B-G Englert, Least-bias state estimation with incomplete unbiased measurements, Phys Rev A 92, 052303 (2015); arXiv:1509.07614.

    Measuring incomplete sets of mutually unbiased bases constitutes a sensible approach to the tomography of high-dimensional quantum systems. The unbiased nature of these bases optimizes the uncertainty hypervolume. However, imposing unbiasedness on the probabilities for the unmeasured bases does not generally yield the estimator with the largest von Neumann entropy, a popular figure of merit in this context. Furthermore, this imposition typically leads to mock density matrices that are not even positive definite. This provides a strong argument against perfunctory applications of linear estimation strategies. We propose to use instead the physical state estimators that maximize the Shannon entropy of the unmeasured outcomes, which quantifies our lack of knowledge fittingly and gives physically meaningful statistical predictions.

  • Y-L Seah, J Shang, HK Ng, DJ Nott, and B-G Englert, Monte Carlo sampling in the quantum state space. II, New J Phys 17, 043018 (2015); arXiv:1407.7806.

    High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the Markov-chain Monte Carlo method known as Hamiltonian Monte Carlo, or hybrid Monte Carlo, can be adapted to this context. It is applicable when an efficient parameterization of the state space is available. The resulting random walk is entirely inside the physical parameter space, and the Hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. We use examples of single- and double-qubit measurements for illustration.
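
The Hamiltonian Monte Carlo kernel itself can be shown compactly on an unconstrained toy target (a 2D Gaussian here); the paper's contribution is the state-space parameterization that lets such a kernel run on quantum states directly. The target, step size, and trajectory length below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
var = np.array([1.0, 4.0])          # toy target: independent Gaussians

def neg_log_p(x):                   # -log target, up to a constant
    return 0.5 * np.sum(x ** 2 / var)

def grad(x):
    return x / var

def hmc_step(x, eps=0.2, n_leap=25):
    p = rng.standard_normal(x.shape)            # resample momentum
    x_new = x.copy()
    p_new = p - 0.5 * eps * grad(x_new)         # leapfrog integration
    for i in range(n_leap):
        x_new = x_new + eps * p_new
        if i < n_leap - 1:
            p_new = p_new - eps * grad(x_new)
    p_new = p_new - 0.5 * eps * grad(x_new)
    dH = (neg_log_p(x_new) - neg_log_p(x)
          + 0.5 * (p_new @ p_new - p @ p))
    return x_new if np.log(rng.random()) < -dH else x   # Metropolis accept

x = np.zeros(2)
samples = []
for i in range(6000):
    x = hmc_step(x)
    if i >= 1000:                   # discard burn-in
        samples.append(x.copy())
samples = np.asarray(samples)
```

The long leapfrog trajectories are what allow the big, weakly correlated steps the abstract refers to.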

  • J Shang, Y-L Seah, HK Ng, DJ Nott, and B-G Englert, Monte Carlo sampling in the quantum state space. INew J Phys 17, 043017 (2015); arXiv:1407.7805.

    High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the standard strategies of rejection sampling, importance sampling, and Markov-chain sampling can be adapted to this context, where the samples must obey the constraints imposed by the positivity of the statistical operator. For a comparison of these sampling methods, we generate sample points in the probability space for two-qubit states probed with a tomographically incomplete measurement, and then use the sample for the calculation of the size and credibility of the recently introduced optimal error regions [New J. Phys. 15, 123026 (2013)]. Another illustration is the computation of the fractional volume of separable two-qubit states.
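
For a single qubit, the simplest of these strategies fits in a few lines: rejection sampling of the uniform (Hilbert-Schmidt) distribution over states by proposing Bloch vectors uniformly in the cube and keeping those inside the ball. The sanity statistic at the end (mean purity 4/5 for this distribution, from E[r^2] = 3/5 over the unit ball) is used only as a check:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_bloch(n):
    # propose uniformly in [-1,1]^3, accept inside the unit ball
    out = []
    while len(out) < n:
        v = rng.uniform(-1, 1, size=3)
        if v @ v <= 1:              # positivity constraint for a qubit
            out.append(v)
    return np.array(out)

bloch = sample_bloch(20000)
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
rhos = 0.5 * (np.eye(2) + bloch[:, 0, None, None] * X
              + bloch[:, 1, None, None] * Y
              + bloch[:, 2, None, None] * Z)

# mean purity Tr(rho^2) = (1 + r^2)/2 should be close to 4/5
purity = 0.5 * (1 + np.einsum('ij,ij->i', bloch, bloch))
```

In higher dimensions the acceptance rate of such naive rejection collapses, which is what motivates the importance-sampling and Markov-chain strategies of the paper.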

  • V Paulisch, R Han, HK Ng, and B-G Englert, Beyond adiabatic elimination: A hierarchy of approximations for multi-photon processes, Eur Phys J Plus 129, 12 (2014); arXiv:1209.6568.

    In multi-level systems, the commonly used adiabatic elimination is a method for approximating the dynamics of the system by eliminating irrelevant, non-resonantly coupled levels. This procedure is, however, somewhat ambiguous, and it is not clear how to improve on it systematically. We use an integro-differential equation for the probability amplitudes of the levels of interest, which is equivalent to the original Schrödinger equation for all probability amplitudes. In conjunction with a Markov approximation, the integro-differential equation is then used to generate a hierarchy of approximations, in which the zeroth order is the adiabatic-elimination approximation. It works well with a proper choice of interaction picture; the procedure suggests criteria for optimizing this choice. The first-order approximation in the hierarchy provides significant improvements over standard adiabatic elimination, without much increase in complexity, and is furthermore not so sensitive to the choice of interaction picture. We illustrate these points with several examples.

  • J Shang, HK Ng, A Sehrawat, X Li, and B-G Englert, Optimal error regions for quantum state estimation, New J Phys 15, 123026 (2013); arXiv:1302.4081.

    Rather than point estimators, states of a quantum system that represent one’s best guess for the given data, we consider optimal regions of estimators. As the natural counterpart of the popular maximum-likelihood point estimator, we introduce the maximum-likelihood region—the region of largest likelihood among all regions of the same size. Here, the size of a region is its prior probability. Another concept is the smallest credible region—the smallest region with pre-chosen posterior probability. For both optimization problems, the optimal region has constant likelihood on its boundary. We discuss criteria for assigning prior probabilities to regions, and illustrate the concepts and methods with several examples.

  • HK Ng and B-G Englert, One-dimensional transport revisited: A simple and exact solution for phase disorder, Phys Rev B 88, 054021 (2013); arXiv:1212.1951.

    Disordered systems have grown in importance in the past decades, with similar phenomena manifesting themselves in many different physical systems. Because of the difficulty of the topic, theoretical progress has mostly emerged from numerical studies or analytical approximations. Here, we provide an exact, analytical solution to the problem of uniform phase disorder in a system of identical scatterers arranged with varying separations along a line. Relying on a relationship with Legendre functions, we demonstrate a simple approach to computing statistics of the transmission probability (or the conductance, in the language of electronic transport), and its reciprocal (or the resistance). Our formalism also gives the probability distribution of the conductance, which reveals features missing from previous approaches to the problem.
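
The transfer-matrix setup can be sketched numerically. The scatterer below is a delta barrier (an illustrative choice, not tied to the paper's derivation); one classic exact consequence of uniform phase disorder, <ln(1/T)> = N ln(1/T_1), serves as the check:

```python
import numpy as np

rng = np.random.default_rng(3)

# transfer matrix of one delta scatterer of strength alpha (det = 1,
# pseudo-unitary, so 1/T = |M[1,1]|^2 for any product of these)
alpha = 0.5
M1 = np.array([[1 + 1j * alpha, 1j * alpha],
               [-1j * alpha, 1 - 1j * alpha]])

def ln_inv_T(n_scat):
    M = np.eye(2, dtype=complex)
    for _ in range(n_scat):
        phi = rng.uniform(0, 2 * np.pi)   # random spacing -> random phase
        P = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
        M = M1 @ P @ M
    return 2 * np.log(abs(M[1, 1]))       # ln(1/T), i.e. log-resistance scale

N = 20
vals = np.array([ln_inv_T(N) for _ in range(5000)])
# uniform phase averaging makes <ln(1/T)> exactly additive:
# <ln(1/T)> = N * ln(1/T_1) = N * ln(1 + alpha**2)
```

The additivity follows because the uniform phase average of ln|1 - z e^{2i phi}| vanishes for |z| < 1; the full conductance distribution of the paper carries much more information than this mean.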

  • R Han, HK Ng, and B-G Englert, Raman transitions without adiabatic elimination: A simple and accurate treatment, J Mod Opt 60, 255 (2013); arXiv:1209.6569.

    Driven Raman processes — nearly resonant two-photon transitions through an intermediate state that is non-resonantly coupled and does not acquire a sizeable population — are commonly treated with a simplified description in which the intermediate state is removed by adiabatic elimination. While the adiabatic-elimination approximation is reliable when the detuning of the intermediate state is quite large, it cannot be trusted in other situations, and it does not allow one to estimate the population in the eliminated state. We introduce an alternative method that keeps all states in the description, without increasing the complexity by much. An integro-differential equation of Lippmann-Schwinger type generates a hierarchy of approximations, but very accurate results are already obtained in the lowest order.
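
The regime in question is easy to reproduce numerically: propagate the full three-level Λ system exactly and compare with the adiabatic-elimination prediction of a two-level Raman oscillation. The parameters below (far-detuned, equal Rabi frequencies) are illustrative assumptions under which standard elimination works well; the paper's hierarchy is about quantifying the corrections when it does not:

```python
import numpy as np
from scipy.linalg import expm

# Lambda system in the rotating frame: ground levels |1>, |2> coupled to a
# far-detuned excited level |3> (detuning Delta >> Rabi frequencies).
Om1 = Om2 = 1.0
Delta = 20.0
H = np.array([[0, 0, Om1 / 2],
              [0, 0, Om2 / 2],
              [Om1 / 2, Om2 / 2, Delta]], complex)

# Adiabatic elimination predicts P2(t) = sin^2(Om1*Om2*t/(4*Delta)),
# i.e. full transfer |1> -> |2> at t = 2*pi*Delta/(Om1*Om2).
t_transfer = 2 * np.pi * Delta / (Om1 * Om2)
psi = expm(-1j * H * t_transfer) @ np.array([1, 0, 0], complex)
pops = np.abs(psi) ** 2   # [P1, P2, P3]; P3 stays of order (Om/(2*Delta))^2
```

The residual excited-state population is exactly the quantity that adiabatic elimination cannot estimate, which is the motivation for keeping all states in the description.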