We give a new upper bound on the quantum query complexity of deciding $st$-connectivity on certain classes of planar graphs, and show the bound is sometimes exponentially better than previous results. We then show Boolean formula evaluation reduces to deciding connectivity on just such a class of graphs. Applying the algorithm for $st$-connectivity to Boolean formula evaluation problems, we match the $O(\sqrt{N})$ bound on the quantum query complexity of evaluating formulas on $N$ variables, give a quadratic speed-up over the classical query complexity of a certain class of promise Boolean formulas, and show this approach can yield superpolynomial quantum/classical separations. These results indicate that this $st$-connectivity-based approach may be the "right" way of looking at quantum algorithms for formula evaluation.

Quantum 1, 26 (2017). https://doi.org/10.22331/q-2017-08-17-26
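As a concrete aside (our own illustration of the standard series-parallel correspondence, not the paper's quantum algorithm): AND-gates compose subgraphs in series, OR-gates compose them in parallel, and an input edge is present exactly when its variable is true, so the formula evaluates to true precisely when $s$ and $t$ are connected. A minimal sketch:

```python
from collections import defaultdict, deque
from itertools import count, product

def formula_to_graph(f, x, edges, fresh):
    """Return (s, t) terminals of the subgraph for subformula f.

    f is a nested tuple: ('var', i), ('and', g, h) or ('or', g, h);
    x is the tuple of input bits. Edges are appended in place.
    """
    if f[0] == 'var':
        s, t = next(fresh), next(fresh)
        if x[f[1]]:                      # input edge present iff variable is true
            edges.append((s, t))
        return s, t
    s1, t1 = formula_to_graph(f[1], x, edges, fresh)
    s2, t2 = formula_to_graph(f[2], x, edges, fresh)
    if f[0] == 'and':                    # series composition: glue t1 to s2
        edges.append((t1, s2))
        return s1, t2
    edges += [(s1, s2), (t1, t2)]        # parallel composition: glue both terminal pairs
    return s1, t1

def connected(edges, s, t):
    """Breadth-first search for an s-t path in an undirected graph."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        for v in adj[queue.popleft()]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return t in seen

def evaluate(f, x):
    if f[0] == 'var':
        return x[f[1]]
    g, h = evaluate(f[1], x), evaluate(f[2], x)
    return (g and h) if f[0] == 'and' else (g or h)

# st-connectivity on the series-parallel graph reproduces the formula value.
f = ('and', ('or', ('var', 0), ('var', 1)), ('var', 2))   # (x0 OR x1) AND x2
for x in product([False, True], repeat=3):
    edges = []
    s, t = formula_to_graph(f, x, edges, count())
    assert connected(edges, s, t) == evaluate(f, x)
```

The quantum speed-ups in the paper come from running a span-program-based connectivity algorithm on graphs of this shape, not from the classical search used here.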

The minimal memory required to model a given stochastic process - known as the statistical complexity - is a widely adopted quantifier of structure in complexity science. Here, we ask if quantum mechanics can fundamentally change the qualitative behaviour of this measure. We study this question in the context of the classical Ising spin chain. In this system, the statistical complexity is known to grow monotonically with temperature. We evaluate the spin chain's quantum mechanical statistical complexity by explicitly constructing its provably simplest quantum model, and demonstrate that this measure exhibits drastically different behaviour: it rises to a maximum at some finite temperature then tends back towards zero for higher temperatures. This demonstrates how complexity, as captured by the amount of memory required to model a process, can exhibit radically different behaviour when quantum processing is allowed.

Quantum 1, 25 (2017). https://doi.org/10.22331/q-2017-08-11-25

The production of quantum states required for use in quantum protocols and technologies is studied by developing the tools to re-engineer a perfect state transfer spin chain so that a separable input excitation is output over multiple sites. We concentrate in particular on cases where the excitation is superposed over a small subset of the qubits on the spin chain, known as fractional revivals, demonstrating that spin chains are capable of producing a far greater range of fractional revivals than previously known, at high speed. We also provide a numerical technique for generating chains that produce arbitrary single-excitation states, such as the W state.

Quantum 1, 24 (2017). https://doi.org/10.22331/q-2017-08-10-24

*This is a Perspective on "Causal hierarchy of multipartite Bell nonlocality" by Rafael Chaves, Daniel Cavalcanti, and Leandro Aolita, published in Quantum 1, 23 (2017).*

Quantum Views 1, 3 (2017).

https://doi.org/10.22331/qv-2017-08-04-3

**By Paul Skrzypczyk, School of Physics, University of Bristol, UK.**

The results of measurements performed locally on entangled quantum systems shared among multiple parties can be correlated in ways which are inexplicable by any classical mechanism. This phenomenon, known as Bell nonlocality, is a fundamental and fascinating aspect of quantum theory [1].

What is meant by classically inexplicable? It means that there is no classical model (often called a local hidden variable model) with the same underlying *causal structure* that can reproduce the quantum predictions. The causal structure is the implicit geometry of the setup – for example the fact that each party’s local measurement result is independent of the other parties’ measurement choices (known as no-signaling), or that the measurement choices are themselves independent of everything else (known as measurement independence).
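To see the classical limit concretely, consider the textbook bipartite CHSH scenario (an illustrative aside, not part of the paper under discussion): every local hidden variable model with the standard causal structure is a mixture of deterministic strategies, so enumerating those strategies yields the classical bound of 2, which quantum measurements can violate up to $2\sqrt{2}$.

```python
from itertools import product

# Each deterministic local strategy fixes Alice's outcomes a = (a0, a1) and
# Bob's outcomes b = (b0, b1), one per measurement setting, in {-1, +1}.
best = max(
    a[0]*b[0] + a[0]*b[1] + a[1]*b[0] - a[1]*b[1]
    for a in product((-1, 1), repeat=2)
    for b in product((-1, 1), repeat=2)
)
assert best == 2   # the local bound; quantum strategies reach 2*sqrt(2)
```

Relaxing the causal structure, as discussed next, amounts to enlarging the set of classical strategies over which such a maximum is taken.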

If we consider trying to reproduce quantum predictions using classical mechanisms with *relaxed causal structure*, then indeed they can often do so. For example, if there is communication among all the parties, then this classical mechanism can readily reproduce any quantum correlation, nonlocal or not. What is remarkable is that some seemingly powerful causal relaxations still cannot reproduce all of the predictions of quantum theory. For example, if the communication is restricted to all but one of the parties, then this is not able to reproduce everything that can be achieved by making local measurements on multipartite entangled states [2]. The nonlocality of such ‘genuine multipartite nonlocal’ correlations is therefore shown to be very strong, as highlighted by the difficulty of classically reproducing them, even given much more freedom.

One barrier to a systematic study of causal relaxations is that the number of relaxations grows exponentially in the number of parties. At least, so it naively seemed. The main result of the work of Chaves, Cavalcanti and Aolita [3] is to identify that large classes of causal relaxations are in fact equivalent to each other, as far as the non-signaling correlations they can produce are concerned. What is really of interest then is the number of different inequivalent classes of causal relaxations, which they show is much smaller, and has a natural hierarchical structure, depending on the total number of relaxations introduced.

Focusing on the tripartite scenario, they demonstrate the power of their result by showing that there are in fact only 8 different inequivalent classes of causal relaxations which are interesting – ones which are not powerful enough to reproduce all non-signaling correlations. Previous results had shown that quantum correlations are inexplicable by 6 of these classes [4]. Of the remaining two classes, the first, which sits at the top of the hierarchy, and is termed the *star* (since one party receives communication from all others), is shown, somewhat astonishingly, not to be able to reproduce all quantum nonlocal correlations, despite being the strongest possible causal relaxation. The second class, termed the *circle* (since each party communicates to their neighbour in a circular fashion), is left as the intriguing open case: it is not known whether it can reproduce all quantum correlations or falls short.

The true power of the results comes in the unifying and simplified view they provide for studying relaxed causal structures. What was previously a vast forest is now a well organised playground, ready to be explored, and played in, as we continue to push forward our understanding of quantum non-locality.

[1] See, e.g., the comprehensive review: N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani and S. Wehner, Bell nonlocality, Rev. Mod. Phys. 86, 419 (2014).

https://doi.org/10.1103/RevModPhys.86.419

[2] G. Svetlichny, Distinguishing three-body from two-body nonseparability by a Bell-type inequality, Phys. Rev. D 35, 3066 (1987).

https://doi.org/10.1103/PhysRevD.35.3066

[3] R. Chaves, D. Cavalcanti and L. Aolita, Causal hierarchy of multipartite Bell nonlocality, Quantum 1, 23 (2017).

https://doi.org/10.22331/q-2017-08-04-23

[4] N. S. Jones, N. Linden and S. Massar, Extent of multiparticle quantum nonlocality, Phys. Rev. A 71, 042329 (2005).

https://doi.org/10.1103/PhysRevA.71.042329

This perspective is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.

As with entanglement, different forms of Bell nonlocality arise in the multipartite scenario. These can be defined in terms of relaxations of the causal assumptions in local hidden-variable theories. However, a characterisation of all the forms of multipartite nonlocality has until now been out of reach, mainly due to the complexity of generic multipartite causal models. Here, we employ the formalism of Bayesian networks to reveal connections among different causal structures that make possible a classification that is both practical and physically meaningful. Our framework holds for arbitrarily many parties. We apply it to study the tripartite scenario in detail, where we fully characterize all the nonlocality classes. Remarkably, we identify new highly nonlocal causal structures that cannot reproduce all quantum correlations. This shows, to our knowledge, the strongest form of quantum multipartite nonlocality known to date. Finally, as a by-product, we derive a non-trivial Bell-type inequality with no quantum violation. Our findings constitute a significant step forward in the understanding of multipartite Bell nonlocality and open several avenues for future research.

Quantum 1, 23 (2017). https://doi.org/10.22331/q-2017-08-04-23

We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Finally, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

Quantum 1, 22 (2017). https://doi.org/10.22331/q-2017-07-28-22

Measurements of an object's temperature are important in many disciplines, from astronomy to engineering, as are estimates of an object's spatial configuration. We present the quantum optimal estimator for the temperature of a distant body based on the black body radiation received in the far-field. We also show how to perform separable quantum optimal estimates of the spatial configuration of a distant object, i.e. imaging. In doing so we necessarily deal with multi-parameter quantum estimation of incompatible observables, a problem that is poorly understood. We compare our optimal observables to the two mode analogue of lensed imaging and find that the latter is far from optimal, even when compared to measurements which are separable. To prove the optimality of the estimators we show that they minimise the cost function weighted by the quantum Fisher information---this is equivalent to maximising the average fidelity between the actual state and the estimated one.

Quantum 1, 21 (2017). https://doi.org/10.22331/q-2017-07-26-21

The notions of error and disturbance appearing in quantum uncertainty relations are often quantified by the discrepancy of a physical quantity from its ideal value. However, these real and ideal values are not the outcomes of simultaneous measurements, and comparing the values of unmeasured observables is not necessarily meaningful according to quantum theory. To overcome these conceptual difficulties, we take a different approach and define error and disturbance in an operational manner. In particular, we formulate both in terms of the probability that one can successfully distinguish the actual measurement device from the relevant hypothetical ideal by any experimental test whatsoever. This definition itself does not rely on the formalism of quantum theory, avoiding many of the conceptual difficulties of usual definitions. We then derive new Heisenberg-type uncertainty relations for both joint measurability and the error-disturbance tradeoff for arbitrary observables of finite-dimensional systems, as well as for the case of position and momentum. Our relations may be directly applied in information processing settings, for example to infer that devices which can faithfully transmit information regarding one observable do not leak any information about conjugate observables to the environment. We also show that Englert's wave-particle duality relation [PRL 77, 2154 (1996)] can be viewed as an error-disturbance uncertainty relation.

Quantum 1, 20 (2017). https://doi.org/10.22331/q-2017-07-25-20

We define the hitting time for a model of continuous-time open quantum walks in terms of quantum jumps. Our starting point is a master equation in Lindblad form, which can be taken as the quantum analogue of the rate equation for a classical continuous-time Markov chain. The quantum jump method is well known in the quantum optics community and has also been applied to simulate open quantum walks in discrete time. This method, however, is well-suited to continuous-time problems. It is shown here that a continuous-time hitting problem is amenable to analysis via quantum jumps: The hitting time can be defined as the time of the first jump. Using this fact, we derive the distribution of hitting times and explicit expressions for its statistical moments. Simple examples are considered to illustrate the final results. We then show that the hitting statistics obtained via quantum jumps is consistent with a previous definition for a measured walk in discrete time [Phys. Rev. A 73, 032341 (2006)] (when generalised to allow for non-unitary evolution and in the limit of small time steps). A caveat of the quantum-jump approach is that it relies on the final state (the state which we want to hit) to share only incoherent edges with other vertices in the graph. We propose a simple remedy to restore the applicability of quantum jumps when this is not the case and show that the hitting-time statistics will again converge to that obtained from the measured discrete walk in appropriate limits.

Quantum 1, 19 (2017). https://doi.org/10.22331/q-2017-07-21-19
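The first-jump definition can be illustrated with a toy example of our own (a two-level atom with one decay channel, not the paper's walk model): for a single jump operator with rate $\gamma$, the no-jump survival probability is $e^{-\gamma t}$, so hitting times sampled as first-jump times are exponentially distributed with mean $1/\gamma$.

```python
# Toy illustration of hitting time as the time of the first quantum jump
# (our own two-level decay example, not the model studied in the paper).
# Survival probability exp(-gamma*t) gives exponential first-jump times,
# sampled here by inverse-transform sampling.
import math
import random

random.seed(0)
gamma = 2.0
samples = [-math.log(1.0 - random.random()) / gamma for _ in range(200_000)]

mean = sum(samples) / len(samples)
assert abs(mean - 1 / gamma) < 0.01   # first moment of the hitting time ~ 1/gamma
```

For walks on larger graphs the same recipe applies: evolve with the effective non-Hermitian Hamiltonian between jumps and record the time of the first jump into the target vertex.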

We investigate decoupling, one of the most important primitives in quantum Shannon theory, by replacing the uniformly distributed random unitaries commonly used to achieve the protocol, with repeated applications of random unitaries diagonal in the Pauli-$Z$ and -$X$ bases. This strategy was recently shown to achieve an approximate unitary $2$-design after a number of repetitions of the process, which implies that the strategy gradually achieves decoupling. Here, we prove that even fewer repetitions of the process achieve decoupling at the same rate as that with the uniform ones, showing that rather imprecise approximations of unitary $2$-designs are sufficient for decoupling. We also briefly discuss efficient implementations of them and implications of our decoupling theorem to coherent state merging and relative thermalisation.

Quantum 1, 18 (2017). https://doi.org/10.22331/q-2017-07-21-18

We introduce a multi-mode squeezing coefficient to characterize entanglement in $N$-partite continuous-variable systems. The coefficient relates to the squeezing of collective observables in the $2N$-dimensional phase space and can be readily extracted from the covariance matrix. Simple extensions further make it possible to reveal entanglement within specific partitions of a multipartite system. Applications with nonlinear observables allow for the detection of non-Gaussian entanglement.

Quantum 1, 17 (2017). https://doi.org/10.22331/q-2017-07-14-17

In this work we consider the ground space connectivity problem for commuting local Hamiltonians. The ground space connectivity problem asks whether it is possible to go from one (efficiently preparable) state to another by applying a polynomial length sequence of 2-qubit unitaries while remaining at all times in a state with low energy for a given Hamiltonian $H$. It was shown in [Gharibian and Sikora, ICALP15] that this problem is QCMA-complete for general local Hamiltonians, where QCMA is defined as QMA with a classical witness and BQP verifier. Here we show that the commuting version of the problem is also QCMA-complete. This provides one of the first examples where commuting local Hamiltonians exhibit complexity theoretic hardness equivalent to general local Hamiltonians.

Quantum 1, 16 (2017). https://doi.org/10.22331/q-2017-07-14-16

*This is an Editorial on "Classification of all alternatives to the Born rule in terms of informational properties" by Thomas D. Galley and Lluis Masanes, published in Quantum 1, 15 (2017).*

Quantum Views 1, 2 (2017).

https://doi.org/10.22331/qv-2017-07-14-2

**By Eric Cavalcanti, Centre for Quantum Dynamics, Griffith University.**

One of the burning questions within quantum foundations is “Why the Quantum?” — what makes quantum theory special, singling it out from the space of possible theories?

The celebrated Gleason’s theorem is one of the earliest in a class of results that select some parts of the quantum formalism and aim to derive the rest from it. Gleason shows that if we assume the quantum representation of measurement outcomes as projectors on a Hilbert space, then any noncontextual assignment of probabilities has the same form as the quantum Born rule. Others, such as Deutsch and Zurek, have proposed derivations of the Born rule from the structure of the quantum state space and dynamics (plus some extra assumptions). One of the aims of this latter approach is to resolve the measurement problem within an Everettian “no-collapse” interpretation. Whether they achieve that aim, however, remains a matter of controversy.

The present paper likewise starts from the assumption that states and transformations have the same structure as in quantum theory, and asks what all the possible alternatives are for representing measurements and probability rules compatible with them. Given this classification, what principles could single out the quantum Born rule?

The work is set within the framework of *generalised probabilistic theories* (GPTs). Based on work by Lucien Hardy, this framework provides a bare-bones description of physical theories through their operational implications, as tools to calculate probabilities for outcomes of measurements, given the state preparations and transformations allowed by the theory. Finding a resolution to “Why the Quantum” then reduces to finding “reasonable” physical principles that allow one to single out quantum theory from the space of GPTs.

Galley and Masanes draw heavily upon group representation theory to show that all the alternatives compatible with the structure of the quantum state space and dynamics are in correspondence with a certain class of representations of the unitary group. This provides a full classification of all theories with alternative measurement postulates to the standard quantum ones. Quantum theory is then picked out as the unique theory that satisfies two hypotheses: no-restriction on measurements and pure-state bit symmetry.

Informally, ‘no-restriction’ postulates that all possible measurements on a state space are allowed by the theory. Bit symmetry is the requirement that any pair of distinguishable states can be mapped into any other pair of distinguishable states via an allowed transformation. While no-restriction has a less direct operational meaning, bit symmetry has an information-theoretic interpretation, and is related to the possibility of reversible computation.

The present work represents a significant technical contribution to the field of generalised probabilistic theories, and opens several questions, including the effect of including measurement update rules, composition of systems, and the information-processing capabilities of the classes of alternative theories introduced here.

This editorial is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.

The standard postulates of quantum theory can be divided into two groups: the first one characterizes the structure and dynamics of pure states, while the second one specifies the structure of measurements and the corresponding probabilities. In this work we keep the first group of postulates and characterize all alternatives to the second group that give rise to finite-dimensional sets of mixed states. We prove a correspondence between all these alternatives and a class of representations of the unitary group. Some features of these probabilistic theories are identical to quantum theory, but there are important differences in others. For example, some theories have three perfectly distinguishable states in a two-dimensional Hilbert space. Others have exotic properties such as lack of bit symmetry, the violation of no simultaneous encoding (a property similar to information causality) and the existence of maximal measurements without phase groups. We also analyze which of these properties single out the Born rule.

Quantum 1, 15 (2017). https://doi.org/10.22331/q-2017-07-14-15

In this work we present a security analysis for quantum key distribution, establishing a rigorous tradeoff between various protocol and security parameters for a class of entanglement-based and prepare-and-measure protocols. The goal of this paper is twofold: 1) to review and clarify the state-of-the-art security analysis based on entropic uncertainty relations, and 2) to provide an accessible resource for researchers interested in a security analysis of quantum cryptographic protocols that takes into account finite resource effects. For this purpose we collect and clarify several arguments spread in the literature on the subject with the goal of making this treatment largely self-contained. More precisely, we focus on a class of prepare-and-measure protocols based on the Bennett-Brassard (BB84) protocol as well as a class of entanglement-based protocols similar to the Bennett-Brassard-Mermin (BBM92) protocol. We carefully formalize the different steps in these protocols, including randomization, measurement, parameter estimation, error correction and privacy amplification, allowing us to be mathematically precise throughout the security analysis. We start from an operational definition of what it means for a quantum key distribution protocol to be secure and derive simple conditions that serve as sufficient conditions for secrecy and correctness. We then derive and eventually discuss tradeoff relations between the block length of the classical computation, the noise tolerance, the secret key length and the security parameters for our protocols. Our results significantly improve upon previously reported tradeoffs.

Quantum 1, 14 (2017). https://doi.org/10.22331/q-2017-07-14-14

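Some of the protocol steps the paper formalizes (randomization, measurement, parameter estimation) can be illustrated with a deliberately simplified, noiseless BB84-style toy simulation. This sketch is for intuition only: the variable names and idealized channel are our own assumptions, and it omits exactly the finite-size error-correction and privacy-amplification analysis that the paper makes rigorous.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # number of transmitted qubits

# Alice prepares random bits in random bases (0 = Z basis, 1 = X basis).
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Bob measures in random bases; over a noiseless channel he recovers
# Alice's bit when the bases agree, and a uniform random bit otherwise.
bob_bases = rng.integers(0, 2, n)
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: keep only the rounds where the bases agreed.
sifted_a, sifted_b = alice_bits[match], bob_bits[match]

# Parameter estimation: sacrifice a random subset of the sifted key
# to estimate the quantum bit error rate (QBER).
k = len(sifted_a) // 2
test_idx = rng.choice(len(sifted_a), size=k, replace=False)
qber = np.mean(sifted_a[test_idx] != sifted_b[test_idx])
print(f"sifted key length: {len(sifted_a)}, estimated QBER: {qber}")
```

On a noiseless channel the estimated QBER is zero; in a real finite-key analysis this estimate carries statistical uncertainty, which is what the tradeoff relations in the paper quantify.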

Macro-realism is the position that certain macroscopic observables must always possess definite values: e.g. the table is in some definite position, even if we do not know what that is precisely. The traditional understanding is that by assuming macro-realism one can derive the Leggett-Garg inequalities, which constrain the possible statistics from certain experiments. Since quantum experiments can violate the Leggett-Garg inequalities, this is taken to rule out the possibility of macro-realism in a quantum universe. However, recent analyses have exposed loopholes in the Leggett-Garg argument, which allow many types of macro-realism to be compatible with quantum theory and hence violation of the Leggett-Garg inequalities. This paper takes a different approach to ruling out macro-realism and the result is a no-go theorem for macro-realism in quantum theory that is stronger than the Leggett-Garg argument. This approach uses the framework of ontological models: an elegant way to reason about foundational issues in quantum theory which has successfully produced many other recent results, such as the PBR theorem.

Quantum 1, 13 (2017). https://doi.org/10.22331/q-2017-07-14-13


First, a clarification: Quantum’s social media accounts are currently managed by the Executive Board, and are their exclusive responsibility. They do not represent the views of Quantum as a whole unless explicitly said otherwise. In particular, sharing of opinion articles does not imply endorsement.

Content shared by Quantum generally falls into the following categories:

- **Papers** published in Quantum and follow-ups (such as editorials, perspectives and further media coverage).

- **News** related to Quantum (for example updates on policies, outreach events and media coverage).

- News about **quantum science** that is of interest to the larger community, at our discretion.

Since we aim to be an international venue for quantum sciences, without any regional bias, for now we have decided not to advertise local workshops, initiatives and programs, no matter how personally supportive we may be of them. The only workshops and events mentioned are in the context of Quantum doing outreach there, and generally only after the event has taken place.

- Analyses and opinion articles about **life in academia**, at our discretion.

Examples of topics that we may touch on are excessive pressure on academics, mental health in academia, endemic problems in the job market, harassment and systemic discrimination. Whenever possible, we will favour general analyses of a phenomenon rather than news of a particular instance.

- News, analyses and opinion articles about **academic publishing and open science**, at our discretion (here we may refer to individual events).

Quantum may invite individuals to **write opinion articles** on any of the subjects described above. Those will be published on Quantum’s website (officially in the online journal Quantum Views under a CC BY 4.0 licence, such that copyright remains with the authors). We expect most of these articles to be editorial-like viewpoints (“perspectives”) about papers published in Quantum.

We will review these guidelines at the end of 2017.

SciPost’s Jean-Sébastien Caux and Quantum’s Christian Gogolin were both invited to speak about “the future of scientific publishing” at a dedicated session on that topic during YRM 2017 in Tarragona.

SciPost and Quantum, while independent endeavors, share many of their core values and have many common goals. Together with the participants of YRM, they discussed topics such as the opportunities and dangers of open-access publishing, the relevance of new technologies for publishing, and the influence of funding policies on publishing.

We thank the participants of YRM 2017 for the very interesting discussions and the organizers for the opportunity to participate!

Quantum has been assigned an International Standard Serial Number (ISSN) by the ISSN International Centre.

The ISSN is similar to the more widely known ISBN, which is used to uniquely identify books. The ISSN achieves the same, but for periodically appearing publications, like journals. Quantum is thus from now on uniquely identified by its ISSN **2521-327X**.

Having obtained an ISSN enables us to proceed with developing Quantum further and will allow us to apply for inclusion in the Web of Science and the Directory of Open Access Journals.

Papers already published in Quantum (and the respective .bib files) have been updated to include the ISSN.

We investigate an approach to universal quantum computation based on the modulation of longitudinal qubit-oscillator coupling. We show how to realize a controlled-phase gate by simultaneously modulating the longitudinal coupling of two qubits to a common oscillator mode. In contrast to the more familiar transversal qubit-oscillator coupling, the magnitude of the effective qubit-qubit interaction does not rely on a small perturbative parameter. As a result, this effective interaction strength can be made large, leading to short gate times and high gate fidelities. We moreover show how the gate infidelity can be exponentially suppressed with squeezing and how the entangling gate can be generalized to qubits coupled to separate oscillators. Our proposal can be realized in multiple physical platforms for quantum computing, including superconducting and spin qubits.

Quantum 1, 11 (2017). https://doi.org/10.22331/q-2017-05-11-11


*This is a Perspective on "Achieving quantum supremacy with sparse and noisy commuting quantum computations" by Michael J. Bremner, Ashley Montanaro, and Dan J. Shepherd, published in Quantum 1, 8 (2017).*

Quantum Views 1, 1 (2017).

https://doi.org/10.22331/qv-2017-04-25-1

**By Bill Fefferman, University of Maryland/NIST.**

One of the most active areas of research in quantum information is “quantum supremacy”. The central goal is to exhibit a provable quantum speedup over classical computation using the restrictive resources of existing or near-term quantum experiments. Starting with work of Bremner, Jozsa, and Shepherd it has been established that even non-universal commuting classes of quantum computations known as “Instantaneous Quantum Polynomial-time” (or IQP) circuits are capable of sampling from distributions that cannot be exactly sampled classically, under mild complexity assumptions.

As quantum supremacy leaps from the domain of theory to experiment, the major open questions involve understanding which intermediate models are capable of demonstrating supremacy, and how to account for experimentally realistic noise in these models. As a starting point, a primary objective has been to strengthen these sampling separations to hold even if the classical sampler is not required to sample exactly from the quantum outcome distribution but instead samples from a distribution close in total variation distance.

Follow-up work of Bremner, Montanaro and Shepherd addressed this stronger scenario. They show that the outcome distribution of IQP circuits cannot be approximately sampled classically, assuming a conjecture concerning the hardness of estimating the complex-temperature partition function of certain random instances of the Ising model. Understanding the feasibility and implications of this conjecture, as well as related conjectures of Aaronson and Arkhipov regarding estimating the permanent of random matrices, has since become one of the most important challenges in quantum complexity theory.

While not settling these conjectures, the present paper makes important contributions to our understanding of approximate sampling hardness results. The first main result extends the prior work to give similar conjectural evidence that approximate sampling from the output distribution of so-called “sparse” IQP circuits is classically hard. A randomly chosen “sparse” IQP circuit on n qubits consists of $O(n \log(n) )$ 2-qubit gates and can be implemented on a square lattice with a shallow depth circuit, with high probability. These structural properties reduce the experimental barrier of implementing such supremacy results and bring the proposal closer to the realm of currently implementable experiments.

The second result studies IQP sampling under a natural noise model, in which independent depolarizing noise is applied to every qubit at the end of the circuit. It is proven that the resulting output distribution becomes easy to sample from classically, illuminating the fragile nature of supremacy results. However, the authors develop new means for protecting against this depolarizing noise model, using ideas from classical error-correcting codes. Crucially, this result is achieved without the full overhead required by traditional quantum fault tolerance. It is very likely that these error-correcting tools will find use in future results on noise-tolerant quantum supremacy proposals.

Taken as a whole, these results give a fresh perspective to some of the most foundational questions in quantum computation—where quantum speedups come from, and under which experimentally motivated settings we can hope to observe these speedups. As such, this work offers a solid contribution to the quantum supremacy literature, and develops useful tools for analyzing candidates for future supremacy experiments.

This perspective is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.

To study the most general causal structures compatible with local quantum mechanics, Oreshkov et al. introduced the notion of a process: a resource shared between some parties that allows for quantum communication between them without a predetermined causal order. These processes can be used to perform several tasks that are impossible in standard quantum mechanics: they allow for the violation of causal inequalities, and provide an advantage for computational and communication complexity. Nonetheless, no process that can be used to violate a causal inequality is known to be physically implementable. There is therefore considerable interest in determining which processes are physical and which are just mathematical artefacts of the framework. Here we take a first step in this direction by proposing a purification postulate: processes are physical only if they are purifiable. We derive necessary conditions for a process to be purifiable, and show that several known processes do not satisfy them.

Quantum 1, 10 (2017). https://doi.org/10.22331/q-2017-04-26-10


We investigate the finite-density phase diagram of a non-abelian $SU(2)$ lattice gauge theory in $(1+1)$-dimensions using tensor network methods. We numerically characterise the phase diagram as a function of the matter filling and of the matter-field coupling, identifying different phases, some of them appearing only at finite densities. For weak matter-field coupling we find a meson BCS liquid phase, which is confirmed by second-order analytical perturbation theory. At unit filling and for strong coupling, the system undergoes a phase transition to a charge density wave of single-site (spin-0) mesons via spontaneous chiral symmetry breaking. At finite densities, the chiral symmetry is restored almost everywhere, and the meson BCS liquid becomes a simple liquid at strong couplings, with the exception of filling two-thirds, where a charge density wave of mesons spreading over neighbouring sites appears. Finally, we identify two tri-critical points between the chiral and the two liquid phases which are compatible with an $SU(2)_2$ Wess-Zumino-Novikov-Witten model. Here we do not take the continuum limit, but we explicitly address the global $U(1)$ charge conservation symmetry.

Quantum 1, 9 (2017). https://doi.org/10.22331/q-2017-04-25-9


The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of $n$ qubits in depth $O(\sqrt{n} \log n)$, and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results demonstrate the challenges faced by experiments designed to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.

Quantum 1, 8 (2017). https://doi.org/10.22331/q-2017-04-25-8

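For intuition, an IQP circuit applies Hadamards to every qubit, then a diagonal unitary built from Z-type terms, then Hadamards again, and measures. The brute-force statevector sketch below is our own toy example: it illustrates only the structure of the model for three qubits, not the paper's sparse construction or its hardness claims.

```python
import numpy as np
from itertools import product

n = 3
dim = 2 ** n
rng = np.random.default_rng(1)

# Random Z and ZZ rotation angles defining the diagonal part of the circuit.
theta1 = rng.uniform(0, 2 * np.pi, n)
theta2 = rng.uniform(0, 2 * np.pi, (n, n))

# Phase picked up by each computational basis state |bits> under the diagonal
# unitary D = exp(i sum_k theta1_k Z_k + i sum_{j<k} theta2_jk Z_j Z_k).
phases = np.zeros(dim)
for idx, bits in enumerate(product([0, 1], repeat=n)):
    z = 1 - 2 * np.array(bits)  # Z eigenvalues +1/-1
    phases[idx] = theta1 @ z + sum(
        theta2[j, k] * z[j] * z[k] for j in range(n) for k in range(j + 1, n))

# |psi> = (H tensor n) D (H tensor n) |0...0>, measured in the computational basis.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)
psi = H @ (np.exp(1j * phases) * H[:, 0])
p = np.abs(psi) ** 2  # output probability distribution
samples = rng.choice(dim, size=5, p=p)
print(np.round(p, 4), samples)
```

This brute-force approach costs $2^n$ memory; the point of the hardness results above is that no classical method is believed to sample from such distributions efficiently at scale.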

We give a complete proposal showing how to detect the non-classical nature of photonic states with naked eyes as detectors. The enabling technology is a sub-Poissonian photonic state that is obtained from single photons, displacement operations in phase space and basic non-photon-number-resolving detectors. We present a detailed statistical analysis of our proposal including imperfect photon creation and detection and a realistic model of the human eye. We conclude that a few tens of hours are sufficient to certify non-classical light with the human eye with a p-value of 10%.

Quantum 1, 7 (2017). https://doi.org/10.22331/q-2017-04-25-7


We apply classical algorithms for approximately solving constraint satisfaction problems to find bounds on extremal eigenvalues of local Hamiltonians. We consider spin Hamiltonians for which we have an upper bound on the number of terms in which each spin participates, and find extensive bounds for the operator norm and ground-state energy of such Hamiltonians under this constraint. In each case the bound is achieved by a product state which can be found efficiently using a classical algorithm.

Quantum 1, 6 (2017). https://doi.org/10.22331/q-2017-04-25-6

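As a minimal illustration of the extremal-eigenvalue idea, consider the special case of a diagonal (all-ZZ, hence commuting) spin Hamiltonian, where a computational-basis product state trivially attains the largest eigenvalue. This toy of our own does not capture the paper's actual contribution, which is obtaining product-state bounds for general, non-commuting local Hamiltonians.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 4

# Toy classical 2-local Hamiltonian H = sum_{i<j} J_ij Z_i Z_j.
J = np.triu(rng.normal(size=(n, n)), k=1)

# Brute-force search over computational-basis product states
# (spin assignments z_i = +/-1); each gives energy z^T J z.
best = max(np.array(bits) @ J @ np.array(bits)
           for bits in product([1, -1], repeat=n))

# Exact top eigenvalue from the full 2^n-dimensional matrix.
Z, I = np.diag([1.0, -1.0]), np.eye(2)
def zz(i, j):
    M = np.ones((1, 1))
    for k in range(n):
        M = np.kron(M, Z if k in (i, j) else I)
    return M
Hmat = sum(J[i, j] * zz(i, j) for i in range(n) for j in range(i + 1, n))
top = np.max(np.linalg.eigvalsh(Hmat))
print(best, top)  # the best product state matches the top eigenvalue here
```

For non-commuting Hamiltonians a product state generally cannot reach the extremal eigenvalue exactly; the paper's results bound how close one can provably get.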

Characterizing quantum systems through experimental data is critical to applications as diverse as metrology and quantum computing. Analyzing this experimental data in a robust and reproducible manner is made challenging, however, by the lack of readily-available software for performing principled statistical analysis. We improve the robustness and reproducibility of characterization by introducing an open-source library, QInfer, to address this need. Our library makes it easy to analyze data from tomography, randomized benchmarking, and Hamiltonian learning experiments either in post-processing, or online as data is acquired. QInfer also provides functionality for predicting the performance of proposed experimental protocols from simulated runs. By delivering easy-to-use characterization tools based on principled statistical analysis, QInfer helps address many outstanding challenges facing quantum technology.

Quantum 1, 5 (2017). https://doi.org/10.22331/q-2017-04-25-5

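The core statistical machinery behind such characterization libraries is Bayesian inference via sequential Monte Carlo (particle filtering). The hand-rolled toy below estimates a single precession frequency from simulated two-outcome experiments; it is our own illustration of the idea and deliberately does not use QInfer's actual API.

```python
import numpy as np

rng = np.random.default_rng(3)

true_omega = 0.7  # the unknown parameter to learn
particles = rng.uniform(0, 1, 2000)              # samples from a uniform prior
weights = np.full(particles.shape, 1 / len(particles))

for _ in range(50):
    t = rng.uniform(0, 20)                       # chosen evolution time
    p1 = np.sin(true_omega * t / 2) ** 2         # Born-rule probability of "1"
    outcome = rng.random() < p1                  # simulated measurement
    # Bayes rule: reweight every particle by its likelihood of the outcome.
    lik = np.sin(particles * t / 2) ** 2
    weights *= lik if outcome else 1 - lik
    weights /= weights.sum()

estimate = np.sum(weights * particles)           # posterior mean
print(estimate)
```

A production implementation adds resampling to fight particle degeneracy and adaptive choice of experiment times, which is exactly the kind of functionality the library packages up.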

We study the fundamental limits on the reliable storage of quantum information in lattices of qubits by deriving tradeoff bounds for approximate quantum error correcting codes. We introduce a notion of local approximate correctability and code distance, and give a number of equivalent formulations thereof, generalizing various exact error-correction criteria. Our tradeoff bounds relate the number of physical qubits $n$, the number of encoded qubits $k$, the code distance $d$, the accuracy parameter $\delta$ that quantifies how well the erasure channel can be reversed, and the locality parameter $\ell$ that specifies the length scale at which the recovery operation can be done. In a regime where the recovery is successful to accuracy $\epsilon$ that is exponentially small in $\ell$, which is the case for perturbations of local commuting projector codes, our bound reads $kd^{\frac{2}{D-1}} \le O\bigl(n (\log n)^{\frac{2D}{D-1}} \bigr)$ for codes on $D$-dimensional lattices of Euclidean metric. We also find that the code distance of any local approximate code cannot exceed $O\bigl(\ell n^{(D-1)/D}\bigr)$ if $\delta \le O(\ell n^{-1/D})$. As a corollary of our formulation of correctability in terms of logical operator avoidance, we show that the code distance $d$ and the size $\tilde d$ of a minimal region that can support all approximate logical operators satisfies $\tilde d d^{\frac{1}{D-1}}\le O\bigl( n \ell^{\frac{D}{D-1}} \bigr)$, where the logical operators are accurate up to $O\bigl( ( n \delta / d )^{1/2}\bigr)$ in operator norm. Finally, we prove that for two-dimensional systems if logical operators can be approximated by operators supported on constant-width flexible strings, then the dimension of the code space must be bounded. This supports one of the assumptions of algebraic anyon theories, that there exist only finitely many anyon types.

Quantum 1, 4 (2017). https://doi.org/10.22331/q-2017-04-25-4


We consider the problem of reproducing the correlations obtained by arbitrary local projective measurements on the two-qubit Werner state $\rho = v |\psi_- \rangle \langle\psi_- | + (1- v ) \frac{\mathbb{1}}{4}$ via a local hidden variable (LHV) model, where $|\psi_- \rangle$ denotes the singlet state. We show analytically that these correlations are local for $v = 999\times 689\times 10^{-6} \cos^4(\pi/50) \simeq 0.6829$. In turn, as this problem is closely related to a purely mathematical one formulated by Grothendieck, our result implies a new bound on the Grothendieck constant $K_G(3) \leq 1/v \simeq 1.4644$. We also present an LHV model for reproducing the statistics of arbitrary POVMs on the Werner state for $v \simeq 0.4553$. The techniques we develop can be adapted to construct LHV models for other entangled states, as well as to bound other Grothendieck constants.

Quantum 1, 3 (2017). https://doi.org/10.22331/q-2017-04-25-3

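A quick numerical sanity check of the quoted values $v \simeq 0.6829$ and $K_G(3) \le 1/v \simeq 1.4644$ (the fourth power of the cosine reproduces both quoted decimals):

```python
import math

# Visibility up to which the analytic LHV model works ...
v = 999 * 689 * 1e-6 * math.cos(math.pi / 50) ** 4
# ... and the resulting upper bound on the Grothendieck constant K_G(3).
bound = 1 / v
print(round(v, 4), round(bound, 4))
```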

The surface code is one of the most successful approaches to topological quantum error-correction. It boasts the smallest known syndrome extraction circuits and correspondingly largest thresholds. Defect-based logical encodings of a new variety called twists have made it possible to implement the full Clifford group without state distillation. Here we investigate a patch-based encoding involving a modified twist. In our modified formulation, the resulting codes, called triangle codes for the shape of their planar layout, have only weight-four checks and relatively simple syndrome extraction circuits that maintain a high, near surface-code-level threshold. They also use 25% fewer physical qubits per logical qubit than the surface code. Moreover, benefiting from the twist, we can implement all Clifford gates by lattice surgery without the need for state distillation. By a surgical transformation to the surface code, we also develop a scheme of doing all Clifford gates on surface code patches in an atypical planar layout, though with less qubit efficiency than the triangle code. Finally, we remark that logical qubits encoded in triangle codes are naturally amenable to logical tomography, and the smallest triangle code can demonstrate high-pseudothreshold fault-tolerance to depolarizing noise using just 13 physical qubits.

Quantum 1, 2 (2017). https://doi.org/10.22331/q-2017-04-25-2

