**By Bill Fefferman, University of Maryland, US.**

One of the most active areas of research in quantum information is “quantum supremacy”. The central goal is to exhibit a provable quantum speedup over classical computation using the restrictive resources of existing or near-term quantum experiments. Starting with the work of Bremner, Jozsa, and Shepherd, it has been established that even non-universal commuting classes of quantum computations, known as “Instantaneous Quantum Polynomial-time” (IQP) circuits, are capable of sampling from distributions that cannot be sampled exactly by any classical device, under mild complexity assumptions.

As quantum supremacy leaps from the domain of theory to experiment, the major open questions involve understanding which intermediate models are capable of demonstrating supremacy, and how to account for experimentally realistic noise in these models. As a starting point, a primary objective has been to strengthen these sampling separations to hold even if the classical sampler is not required to sample exactly from the quantum outcome distribution but instead samples from a distribution close in total variation distance.
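The closeness notion used in these approximate-sampling results is total variation distance. As a minimal illustration (the names and example distributions below are ours, not from the paper), it is half the L1 distance between two distributions over the same finite outcome set:

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance: half the L1 distance between p and q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# A classical sampler is "epsilon-close" to the quantum device if the TV
# distance between the two output distributions is at most epsilon.
uniform = np.full(4, 0.25)
skewed = np.array([0.4, 0.3, 0.2, 0.1])
print(total_variation(uniform, skewed))  # ≈ 0.2
```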

Follow-up work of Bremner, Montanaro, and Shepherd addressed this stronger scenario. They showed that the outcome distribution of IQP circuits cannot be approximately sampled classically, assuming a conjecture concerning the hardness of estimating the complex-temperature partition function of certain random instances of the Ising model. Understanding the feasibility and implications of this conjecture, as well as related conjectures of Aaronson and Arkhipov regarding estimating the permanent of random matrices, has since become one of the most important challenges in quantum complexity theory.

While not settling these conjectures, the present paper makes important contributions to our understanding of approximate sampling hardness results. The first main result extends the prior work to give similar conjectural evidence that approximate sampling from the output distribution of so-called “sparse” IQP circuits is classically hard. A randomly chosen sparse IQP circuit on *n* qubits consists of *O(n log n)* 2-qubit gates and, with high probability, can be implemented on a square lattice by a shallow-depth circuit. These structural properties reduce the experimental barrier to implementing such supremacy results and bring the proposal closer to the realm of currently implementable experiments.

The second result studies IQP sampling under a natural noise model, in which independent depolarizing noise is applied to every qubit at the end of the circuit. It is proven that the resulting output distribution becomes easy to sample from classically, illuminating the fragile nature of supremacy results. However, the authors develop new means for protecting against this depolarizing noise model, using ideas from classical error-correcting codes. Crucially, this result is achieved without the full overhead required by traditional quantum fault tolerance. It is very likely that these error-correcting tools will find use in future results on noise-tolerant quantum supremacy proposals.
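A toy picture of why final-round depolarizing noise helps the classical simulator: on the computational-basis measurement statistics it acts like independent bit-flips, which pull any output distribution toward the uniform one. A sketch under that assumption (the flip rate `p` below is our stand-in for the depolarizing strength, not a parameter from the paper):

```python
import numpy as np

def apply_bitflip_noise(dist, n, p):
    """Flip each of the n output bits independently with probability p."""
    K1 = np.array([[1 - p, p], [p, 1 - p]])  # single-bit transition matrix
    K = np.array([[1.0]])
    for _ in range(n):
        K = np.kron(K, K1)                   # independent noise on every qubit
    return K @ dist

n = 3
rng = np.random.default_rng(0)
dist = rng.random(2**n)
dist /= dist.sum()                           # an arbitrary output distribution

uniform = np.full(2**n, 1 / 2**n)
tv = lambda a, b: 0.5 * np.abs(a - b).sum()

# The noise strictly contracts toward uniform, which is trivially easy
# to sample from classically.
noisy = apply_bitflip_noise(dist, n, 0.1)
print(tv(dist, uniform), tv(noisy, uniform))
```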

Taken as a whole, these results give a fresh perspective on some of the most foundational questions in quantum computation: where quantum speedups come from, and in which experimentally motivated settings we can hope to observe them. As such, this work offers a solid contribution to the quantum supremacy literature, and develops useful tools for analyzing candidates for future supremacy experiments.

Self-testing allows classical referees to verify the quantum behaviour of some untrusted devices. Recently we developed a framework for building large self-tests by repeating a smaller self-test many times in parallel. However, the framework did not apply to the CHSH test, which tests a maximally entangled pair of qubits and is the best-known and most widely used test of this type. Here we extend the parallel self-testing framework to build parallel CHSH self-tests for any number of pairs of maximally entangled qubits. Our construction achieves an error bound which is polynomial in the number of tested qubit pairs.
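For readers unfamiliar with the underlying test, here is a short numeric check of the CHSH value reached by a single maximally entangled pair with the standard optimal settings (illustrative only; this is the basic test, not the parallel construction):

```python
import numpy as np

# Pauli matrices and the singlet state |psi-> = (|01> - |10>)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def corr(A, B):
    """Expectation value <psi| A (x) B |psi>."""
    return (psi.conj() @ np.kron(A, B) @ psi).real

# Standard optimal CHSH settings for the singlet.
A0, A1 = Z, X
B0 = -(Z + X) / np.sqrt(2)
B1 = (X - Z) / np.sqrt(2)

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(S)  # Tsirelson bound 2*sqrt(2) ≈ 2.828, above the classical limit 2
```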

The surface code is one of the most successful approaches to topological quantum error-correction. It boasts the smallest known syndrome extraction circuits and correspondingly largest thresholds. Defect-based logical encodings of a new variety called twists have made it possible to implement the full Clifford group without state distillation. Here we investigate a patch-based encoding involving a modified twist. In our modified formulation, the resulting codes, called triangle codes for the shape of their planar layout, have only weight-four checks and relatively simple syndrome extraction circuits that maintain a high, near surface-code-level threshold. They also use 25% fewer physical qubits per logical qubit than the surface code. Moreover, benefiting from the twist, we can implement all Clifford gates by lattice surgery without the need for state distillation. By a surgical transformation to the surface code, we also develop a scheme for performing all Clifford gates on surface code patches in an atypical planar layout, though with less qubit efficiency than the triangle code. Finally, we remark that logical qubits encoded in triangle codes are naturally amenable to logical tomography, and the smallest triangle code can demonstrate high-pseudothreshold fault-tolerance to depolarizing noise using just 13 physical qubits.

We consider the problem of reproducing the correlations obtained by arbitrary local projective measurements on the two-qubit Werner state $\rho = v |\psi_- \rangle \langle\psi_- | + (1- v ) \frac{\mathbb{1}}{4}$ via a local hidden variable (LHV) model, where $|\psi_- \rangle$ denotes the singlet state. We show analytically that these correlations are local for $v = 999\times689\times10^{-6}\cos^4(\pi/50) \simeq 0.6829$. In turn, as this problem is closely related to a purely mathematical one formulated by Grothendieck, our result implies a new bound on the Grothendieck constant, $K_G(3) \leq 1/v \simeq 1.4644$. We also present an LHV model for reproducing the statistics of arbitrary POVMs on the Werner state for $v \simeq 0.4553$. The techniques we develop can be adapted to construct LHV models for other entangled states, as well as to bound other Grothendieck constants.
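A quick numeric sanity check of the quoted numbers (taking the exponent on the cosine to be four, which is the choice that reproduces the stated 0.6829):

```python
import math

# v = 999 * 689 * 1e-6 * cos^4(pi/50), the visibility up to which the
# projective-measurement correlations admit an LHV model.
v = 999 * 689 * 1e-6 * math.cos(math.pi / 50) ** 4
print(v)      # ≈ 0.6829
print(1 / v)  # ≈ 1.4644, the implied upper bound on K_G(3)
```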

We study the fundamental limits on the reliable storage of quantum information in lattices of qubits by deriving tradeoff bounds for approximate quantum error correcting codes. We introduce a notion of local approximate correctability and code distance, and give a number of equivalent formulations thereof, generalizing various exact error-correction criteria. Our tradeoff bounds relate the number of physical qubits $n$, the number of encoded qubits $k$, the code distance $d$, the accuracy parameter $\delta$ that quantifies how well the erasure channel can be reversed, and the locality parameter $\ell$ that specifies the length scale at which the recovery operation can be done. In a regime where the recovery is successful to accuracy $\epsilon$ that is exponentially small in $\ell$, which is the case for perturbations of local commuting projector codes, our bound reads $kd^{\frac{2}{D-1}} \le O\bigl(n (\log n)^{\frac{2D}{D-1}} \bigr)$ for codes on $D$-dimensional lattices of Euclidean metric. We also find that the code distance of any local approximate code cannot exceed $O\bigl(\ell n^{(D-1)/D}\bigr)$ if $\delta \le O(\ell n^{-1/D})$. As a corollary of our formulation of correctability in terms of logical operator avoidance, we show that the code distance $d$ and the size $\tilde d$ of a minimal region that can support all approximate logical operators satisfies $\tilde d d^{\frac{1}{D-1}}\le O\bigl( n \ell^{\frac{D}{D-1}} \bigr)$, where the logical operators are accurate up to $O\bigl( ( n \delta / d )^{1/2}\bigr)$ in operator norm. Finally, we prove that for two-dimensional systems if logical operators can be approximated by operators supported on constant-width flexible strings, then the dimension of the code space must be bounded. This supports one of the assumptions of algebraic anyon theories, that there exist only finitely many anyon types.

Characterizing quantum systems through experimental data is critical to applications as diverse as metrology and quantum computing. Analyzing this experimental data in a robust and reproducible manner is made challenging, however, by the lack of readily available software for performing principled statistical analysis. We address this need by introducing an open-source library, QInfer, improving the robustness and reproducibility of characterization. Our library makes it easy to analyze data from tomography, randomized benchmarking, and Hamiltonian learning experiments, either in post-processing or online as data is acquired. QInfer also provides functionality for predicting the performance of proposed experimental protocols from simulated runs. By delivering easy-to-use characterization tools based on principled statistical analysis, QInfer helps address many outstanding challenges facing quantum technology.
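The statistical core of such tools is sequential Bayesian updating. A minimal sketch in plain NumPy, deliberately *not* using QInfer's actual API, estimating a single bias parameter online as data arrives:

```python
import numpy as np

# Online Bayesian inference on a parameter grid: the kind of sequential
# updating that characterization libraries automate. This is an illustration
# of the statistical idea only, not QInfer's interface.
rng = np.random.default_rng(1)
true_p = 0.7                                  # hidden parameter to learn

grid = np.linspace(0, 1, 201)                 # candidate parameter values
weights = np.full(grid.size, 1 / grid.size)   # uniform prior

for _ in range(500):
    outcome = rng.random() < true_p           # one new datum arrives
    likelihood = grid if outcome else 1 - grid
    weights *= likelihood                     # Bayes rule, then renormalise
    weights /= weights.sum()

estimate = (grid * weights).sum()             # posterior mean
print(estimate)
```

The same loop structure covers tomography or Hamiltonian learning once the likelihood function is replaced by the appropriate measurement model.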

We apply classical algorithms for approximately solving constraint satisfaction problems to find bounds on extremal eigenvalues of local Hamiltonians. We consider spin Hamiltonians for which we have an upper bound on the number of terms in which each spin participates, and find extensive bounds for the operator norm and ground-state energy of such Hamiltonians under this constraint. In each case the bound is achieved by a product state which can be found efficiently using a classical algorithm.
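The flavour of the result can be illustrated on a toy two-qubit Hamiltonian (our own example term, not one from the paper): any product state certifies a lower bound on the operator norm, here found by a brute-force scan over Bloch angles rather than the paper's efficient algorithm:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
# Hypothetical 2-local Hamiltonian: a ZZ coupling plus transverse fields.
H = np.kron(Z, Z) + 0.5 * (np.kron(X, np.eye(2)) + np.kron(np.eye(2), X))

def bloch(theta, phi):
    """Single-qubit pure state at Bloch angles (theta, phi)."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

# Scan product states |a>|b> on a coarse angle grid; each expectation
# value <ab|H|ab> is a certified lower bound on ||H||.
best = -np.inf
for ta in np.linspace(0, np.pi, 13):
    for pa in np.linspace(0, 2 * np.pi, 13):
        for tb in np.linspace(0, np.pi, 13):
            for pb in np.linspace(0, 2 * np.pi, 13):
                psi = np.kron(bloch(ta, pa), bloch(tb, pb))
                best = max(best, (psi.conj() @ H @ psi).real)

norm = np.abs(np.linalg.eigvalsh(H)).max()
print(best, norm)  # the product-state bound never exceeds the true norm
```

Here the best product state reaches 1.25 while the true norm is $\sqrt{2}$; the gap is exactly the contribution of entanglement that such product-state bounds forgo.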

We give a complete proposal showing how to detect the non-classical nature of photonic states with the naked eye as detector. The enabling technology is a sub-Poissonian photonic state that is obtained from single photons, displacement operations in phase space, and basic non-photon-number-resolving detectors. We present a detailed statistical analysis of our proposal, including imperfect photon creation and detection and a realistic model of the human eye. We conclude that a few tens of hours are sufficient to certify non-classical light with the human eye at a p-value of 10%.

The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of n qubits in depth O(sqrt(n) log n), and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results demonstrate the challenges faced by experiments designed to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.
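The classical error-correction idea can be conveyed with the simplest code of all: repetition plus majority vote, which we use here purely as a stand-in for the codes actually employed in the paper. Encoding each logical bit into r copies suppresses independent bit-flip noise exponentially in r, with no quantum fault-tolerance machinery:

```python
import numpy as np

def logical_error_rate(p, r, trials, seed=0):
    """Monte Carlo estimate of the majority-vote failure rate for an
    r-bit repetition code under independent bit-flips at rate p."""
    rng = np.random.default_rng(seed)
    flips = rng.random((trials, r)) < p    # noise on each physical bit
    decoded = flips.sum(axis=1) > r // 2   # majority vote says "flipped"
    return decoded.mean()

p = 0.1
print(logical_error_rate(p, 1, 100000))   # ≈ p: no protection
print(logical_error_rate(p, 9, 100000))   # far below p: noise suppressed
```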

We investigate the finite-density phase diagram of a non-abelian $SU(2)$ lattice gauge theory in $(1+1)$ dimensions using tensor network methods. We numerically characterise the phase diagram as a function of the matter filling and of the matter-field coupling, identifying different phases, some of which appear only at finite densities. For weak matter-field coupling we find a meson BCS liquid phase, which is confirmed by second-order analytical perturbation theory. At unit filling and strong coupling, the system undergoes a phase transition to a charge density wave of single-site (spin-0) mesons via spontaneous chiral symmetry breaking. At finite densities, the chiral symmetry is restored almost everywhere, and the meson BCS liquid becomes a simple liquid at strong couplings, with the exception of filling two-thirds, where a charge density wave of mesons spreading over neighbouring sites appears. Finally, we identify two tricritical points between the chiral and the two liquid phases which are compatible with an $SU(2)_2$ Wess-Zumino-Novikov-Witten model. Here we do not perform the continuum limit, but we explicitly address the global $U(1)$ charge-conservation symmetry.

To study the most general causal structures compatible with local quantum mechanics, Oreshkov et al. introduced the notion of a process: a resource shared between some parties that allows for quantum communication between them without a predetermined causal order. These processes can be used to perform several tasks that are impossible in standard quantum mechanics: they allow for the violation of causal inequalities, and provide an advantage for computational and communication complexity. Nonetheless, no process that can be used to violate a causal inequality is known to be physically implementable. There is therefore considerable interest in determining which processes are physical and which are just mathematical artefacts of the framework. Here we take a first step in this direction by proposing a purification postulate: processes are physical only if they are purifiable. We derive necessary conditions for a process to be purifiable, and show that several known processes do not satisfy them.
