We derive a framework for quantifying entanglement in multipartite and high-dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Finally, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

Quantum 1, 22 (2017). https://doi.org/10.22331/q-2017-07-28-22


Measurements of an object's temperature are important in many disciplines, from astronomy to engineering, as are estimates of an object's spatial configuration. We present the quantum optimal estimator for the temperature of a distant body based on the black body radiation received in the far-field. We also show how to perform separable quantum optimal estimates of the spatial configuration of a distant object, i.e. imaging. In doing so we necessarily deal with multi-parameter quantum estimation of incompatible observables, a problem that is poorly understood. We compare our optimal observables to the two-mode analogue of lensed imaging and find that the latter is far from optimal, even when compared to measurements which are separable. To prove the optimality of the estimators we show that they minimise the cost function weighted by the quantum Fisher information; this is equivalent to maximising the average fidelity between the actual state and the estimated one.
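For background (standard single-parameter estimation theory, not the paper's multi-parameter result): for a family of pure states $|\psi_\theta\rangle$, the quantum Fisher information that enters such cost functions takes the form

```latex
F_Q(\theta) = 4\left( \langle \partial_\theta \psi_\theta | \partial_\theta \psi_\theta \rangle
  - \left| \langle \psi_\theta | \partial_\theta \psi_\theta \rangle \right|^2 \right),
```

and it bounds the variance of any unbiased estimator through the quantum Cramér-Rao inequality $\mathrm{Var}(\hat{\theta}) \geq 1/(\nu F_Q(\theta))$ for $\nu$ independent repetitions.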

Quantum 1, 21 (2017). https://doi.org/10.22331/q-2017-07-26-21


The notions of error and disturbance appearing in quantum uncertainty relations are often quantified by the discrepancy of a physical quantity from its ideal value. However, these real and ideal values are not the outcomes of simultaneous measurements, and comparing the values of unmeasured observables is not necessarily meaningful according to quantum theory. To overcome these conceptual difficulties, we take a different approach and define error and disturbance in an operational manner. In particular, we formulate both in terms of the probability that one can successfully distinguish the actual measurement device from the relevant hypothetical ideal by any experimental test whatsoever. This definition itself does not rely on the formalism of quantum theory, avoiding many of the conceptual difficulties of usual definitions. We then derive new Heisenberg-type uncertainty relations for both joint measurability and the error-disturbance tradeoff for arbitrary observables of finite-dimensional systems, as well as for the case of position and momentum. Our relations may be directly applied in information processing settings, for example to infer that devices which can faithfully transmit information regarding one observable do not leak any information about conjugate observables to the environment. We also show that Englert's wave-particle duality relation [PRL 77, 2154 (1996)] can be viewed as an error-disturbance uncertainty relation.

Quantum 1, 20 (2017). https://doi.org/10.22331/q-2017-07-25-20


We define the hitting time for a model of continuous-time open quantum walks in terms of quantum jumps. Our starting point is a master equation in Lindblad form, which can be taken as the quantum analogue of the rate equation for a classical continuous-time Markov chain. The quantum jump method is well known in the quantum optics community and has also been applied to simulate open quantum walks in discrete time. This method, however, is well suited to continuous-time problems. It is shown here that a continuous-time hitting problem is amenable to analysis via quantum jumps: The hitting time can be defined as the time of the first jump. Using this fact, we derive the distribution of hitting times and explicit expressions for its statistical moments. Simple examples are considered to illustrate the final results. We then show that the hitting statistics obtained via quantum jumps are consistent with a previous definition for a measured walk in discrete time [Phys. Rev. A 73, 032341 (2006)] (when generalised to allow for non-unitary evolution and in the limit of small time steps). A caveat of the quantum-jump approach is that it relies on the final state (the state which we want to hit) to share only incoherent edges with other vertices in the graph. We propose a simple remedy to restore the applicability of quantum jumps when this is not the case and show that the hitting-time statistics will again converge to that obtained from the measured discrete walk in appropriate limits.
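For reference, a master equation in Lindblad form, the starting point above, reads

```latex
\dot{\rho} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\left\{ L_k^\dagger L_k, \rho \right\} \right),
```

where the jump operators $L_k$ generate the quantum jumps whose first occurrence defines the hitting time.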

Quantum 1, 19 (2017). https://doi.org/10.22331/q-2017-07-21-19


We investigate decoupling, one of the most important primitives in quantum Shannon theory, by replacing the uniformly distributed random unitaries commonly used to achieve the protocol, with repeated applications of random unitaries diagonal in the Pauli-$Z$ and -$X$ bases. This strategy was recently shown to achieve an approximate unitary $2$-design after a number of repetitions of the process, which implies that the strategy gradually achieves decoupling. Here, we prove that even fewer repetitions of the process achieve decoupling at the same rate as that with the uniform ones, showing that rather imprecise approximations of unitary $2$-designs are sufficient for decoupling. We also briefly discuss efficient implementations of them and implications of our decoupling theorem to coherent state merging and relative thermalisation.

Quantum 1, 18 (2017). https://doi.org/10.22331/q-2017-07-21-18


We introduce a multi-mode squeezing coefficient to characterize entanglement in $N$-partite continuous-variable systems. The coefficient relates to the squeezing of collective observables in the $2N$-dimensional phase space and can be readily extracted from the covariance matrix. Simple extensions further make it possible to reveal entanglement within specific partitions of a multipartite system. Applications with nonlinear observables allow for the detection of non-Gaussian entanglement.

Quantum 1, 17 (2017). https://doi.org/10.22331/q-2017-07-14-17


In this work we consider the ground space connectivity problem for commuting local Hamiltonians. The ground space connectivity problem asks whether it is possible to go from one (efficiently preparable) state to another by applying a polynomial length sequence of 2-qubit unitaries while remaining at all times in a state with low energy for a given Hamiltonian $H$. It was shown in [Gharibian and Sikora, ICALP15] that this problem is QCMA-complete for general local Hamiltonians, where QCMA is defined as QMA with a classical witness and BQP verifier. Here we show that the commuting version of the problem is also QCMA-complete. This provides one of the first examples where commuting local Hamiltonians exhibit complexity theoretic hardness equivalent to general local Hamiltonians.

Quantum 1, 16 (2017). https://doi.org/10.22331/q-2017-07-14-16


*This is an Editorial on "Classification of all alternatives to the Born rule in terms of informational properties" by Thomas D. Galley and Lluis Masanes, published in Quantum 1, 15 (2017).*

Quantum Views 1, 2 (2017).

https://doi.org/10.22331/qv-2017-07-14-2

**By Eric Cavalcanti, Centre for Quantum Dynamics, Griffith University.**

One of the burning questions within quantum foundations is “Why the Quantum?” — what makes quantum theory special, singling it out from the space of possible theories?

The celebrated Gleason’s theorem is one of the earliest in a class of results that select some parts of the quantum formalism and aim to derive the rest from it. Gleason shows that if we assume the quantum representation of measurement outcomes as projectors on a Hilbert space, then any noncontextual assignment of probabilities has the same form as the quantum Born rule. Others, such as Deutsch and Zurek, have proposed derivations of the Born rule from the structure of the quantum state space and dynamics (plus some extra assumptions). One of the aims of this latter approach is to resolve the measurement problem within an Everettian “no-collapse” interpretation. Whether they achieve that aim, however, remains a matter of controversy.
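Concretely, the Born rule in question assigns to outcome $i$ of a projective measurement $\{\Pi_i\}$ on a state $\rho$ the probability

```latex
p(i \mid \rho) = \mathrm{Tr}\left( \rho\, \Pi_i \right),
```

which Gleason's theorem shows is the only noncontextual probability assignment on Hilbert spaces of dimension three or more.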

The present paper likewise starts from the assumption that states and transformations have the same structure as in quantum theory, and asks what are all possible alternatives to represent measurements and probability rules compatible with those. Given this classification, what principles could single out the quantum Born rule?

The work is set within the framework of *generalised probabilistic theories* (GPTs). This framework, based on work by Lucien Hardy, provides a bare-bones description of physical theories through their operational implications, as tools to calculate probabilities for outcomes of measurements, given the state preparations and transformations allowed by the theory. Finding a resolution to “Why the Quantum” then reduces to finding “reasonable” physical principles that allow one to single out quantum theory from the space of GPTs.

Galley and Masanes draw heavily upon group representation theory to show that all the alternatives compatible with the structure of the quantum state space and dynamics are in correspondence with a certain class of representations of the unitary group. This provides a full classification of all theories with alternative measurement postulates to the standard quantum ones. Quantum theory is then picked out as the unique theory that satisfies two hypotheses: no-restriction on measurements and pure-state bit symmetry.

Informally, ‘no-restriction’ postulates that all possible measurements on a state space are allowed by the theory. Bit symmetry is the requirement that any pair of distinguishable states can be mapped into any other pair of distinguishable states via an allowed transformation. While no-restriction has a less direct operational meaning, bit symmetry has an information-theoretic interpretation and is related to the possibility of reversible computation.

The present work represents a significant technical contribution to the field of generalised probabilistic theories, and opens several questions, including the effect of including measurement update rules, composition of systems, and the information-processing capabilities of the classes of alternative theories introduced here.

This editorial is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.

The standard postulates of quantum theory can be divided into two groups: the first one characterizes the structure and dynamics of pure states, while the second one specifies the structure of measurements and the corresponding probabilities. In this work we keep the first group of postulates and characterize all alternatives to the second group that give rise to finite-dimensional sets of mixed states. We prove a correspondence between all these alternatives and a class of representations of the unitary group. Some features of these probabilistic theories are identical to quantum theory, but there are important differences in others. For example, some theories have three perfectly distinguishable states in a two-dimensional Hilbert space. Others have exotic properties such as lack of bit symmetry, the violation of no simultaneous encoding (a property similar to information causality) and the existence of maximal measurements without phase groups. We also analyze which of these properties single out the Born rule.

Quantum 1, 15 (2017). https://doi.org/10.22331/q-2017-07-14-15


In this work we present a security analysis for quantum key distribution, establishing a rigorous tradeoff between various protocol and security parameters for a class of entanglement-based and prepare-and-measure protocols. The goal of this paper is twofold: 1) to review and clarify the state-of-the-art security analysis based on entropic uncertainty relations, and 2) to provide an accessible resource for researchers interested in a security analysis of quantum cryptographic protocols that takes into account finite resource effects. For this purpose we collect and clarify several arguments spread in the literature on the subject with the goal of making this treatment largely self-contained. More precisely, we focus on a class of prepare-and-measure protocols based on the Bennett-Brassard (BB84) protocol as well as a class of entanglement-based protocols similar to the Bennett-Brassard-Mermin (BBM92) protocol. We carefully formalize the different steps in these protocols, including randomization, measurement, parameter estimation, error correction and privacy amplification, allowing us to be mathematically precise throughout the security analysis. We start from an operational definition of what it means for a quantum key distribution protocol to be secure and derive simple conditions that serve as sufficient conditions for secrecy and correctness. We then derive and eventually discuss tradeoff relations between the block length of the classical computation, the noise tolerance, the secret key length and the security parameters for our protocols. Our results significantly improve upon previously reported tradeoffs.

Quantum 1, 14 (2017). https://doi.org/10.22331/q-2017-07-14-14


Macro-realism is the position that certain macroscopic observables must always possess definite values: e.g. the table is in some definite position, even if we do not know what that is precisely. The traditional understanding is that by assuming macro-realism one can derive the Leggett-Garg inequalities, which constrain the possible statistics from certain experiments. Since quantum experiments can violate the Leggett-Garg inequalities, this is taken to rule out the possibility of macro-realism in a quantum universe. However, recent analyses have exposed loopholes in the Leggett-Garg argument, which allow many types of macro-realism to be compatible with quantum theory and hence violation of the Leggett-Garg inequalities. This paper takes a different approach to ruling out macro-realism and the result is a no-go theorem for macro-realism in quantum theory that is stronger than the Leggett-Garg argument. This approach uses the framework of ontological models: an elegant way to reason about foundational issues in quantum theory which has successfully produced many other recent results, such as the PBR theorem.

Quantum 1, 13 (2017). https://doi.org/10.22331/q-2017-07-14-13


First, a clarification: Quantum’s social media accounts are currently managed by the Executive Board, and are their exclusive responsibility. They do not represent the views of Quantum as a whole unless explicitly said otherwise. In particular, sharing of opinion articles does not imply endorsement.

Content shared by Quantum generally falls into the following categories:

- **Papers** published in Quantum and follow-ups (such as editorials, perspectives and further media coverage).

- **News** related to Quantum (for example updates on policies, outreach events and media coverage).

- News about **quantum science** that is of interest to the larger community, at our discretion.

Since we aim to be an international venue for quantum sciences, without any regional bias, for now we have decided not to advertise local workshops, initiatives and programs, no matter how personally supportive we may be of them. The only workshops and events mentioned are in the context of Quantum doing outreach there, and generally only after the event has taken place.

- Analyses and opinion articles about **life in academia**, at our discretion.

Examples of topics we may touch on include excessive pressure on academics, mental health in academia, endemic problems in the job market, harassment and systemic discrimination. Whenever possible, we will favour general analyses of a phenomenon rather than news of a particular instance.

- News, analyses and opinion articles about **academic publishing and open science**, at our discretion (here we may refer to individual events).

Quantum may invite individuals to **write opinion articles** on any of the subjects described above. Those will be published on Quantum’s website (officially in the online journal Quantum Views under a CC BY 4.0 licence, such that copyright remains with the authors). We expect most of these articles to be editorial-like viewpoints (“perspectives”) about papers published in Quantum.

We will review these guidelines at the end of 2017.

SciPost’s Jean-Sébastien Caux and Quantum’s Christian Gogolin were both invited to speak about “the future of scientific publishing” at a dedicated session on that topic during YRM 2017 in Tarragona.

SciPost and Quantum, while independent endeavors, share many of their core values and have many common goals. Together with the participants of YRM, the speakers discussed topics such as the opportunities and dangers of open-access publishing, the relevance of new technologies for publishing, and the influence of funding policies on publishing.

We thank the participants of YRM 2017 for the very interesting discussions and the organizers for the opportunity to participate!

Quantum has been assigned an International Standard Serial Number (ISSN) by the ISSN International Centre.

The ISSN is similar to the more widely known ISBN, which is used to uniquely identify books. The ISSN achieves the same, but for periodically appearing publications, like journals. Quantum is thus from now on uniquely identified by its ISSN **2521-327X**.
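As an aside, the trailing character of an ISSN is a mod-11 check digit computed from the first seven digits, with the value ten written as "X". A minimal sketch (the function name is ours, chosen for illustration):

```python
def issn_check_digit(first7: str) -> str:
    """Compute the ISSN check character from the first seven digits.

    Weights 8, 7, ..., 2 are applied left to right; the check character
    is (11 - weighted_sum mod 11) mod 11, with 10 written as 'X'.
    """
    total = sum(int(d) * w for d, w in zip(first7, range(8, 1, -1)))
    remainder = (11 - total % 11) % 11
    return "X" if remainder == 10 else str(remainder)

# Quantum's ISSN 2521-327X: the digits 2521327 yield check character 'X'
print(issn_check_digit("2521327"))  # prints "X"
```

Running this on Quantum's own number reproduces the final "X", confirming the ISSN is internally consistent.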

Having obtained an ISSN enables us to proceed with developing Quantum further and will allow us to apply for inclusion in the Web of Science and the Directory of Open Access Journals.

Papers already published in Quantum (and the respective .bib files) have been updated to include the ISSN.

We investigate an approach to universal quantum computation based on the modulation of longitudinal qubit-oscillator coupling. We show how to realize a controlled-phase gate by simultaneously modulating the longitudinal coupling of two qubits to a common oscillator mode. In contrast to the more familiar transversal qubit-oscillator coupling, the magnitude of the effective qubit-qubit interaction does not rely on a small perturbative parameter. As a result, this effective interaction strength can be made large, leading to short gate times and high gate fidelities. We moreover show how the gate infidelity can be exponentially suppressed with squeezing and how the entangling gate can be generalized to qubits coupled to separate oscillators. Our proposal can be realized in multiple physical platforms for quantum computing, including superconducting and spin qubits.

Quantum 1, 11 (2017). https://doi.org/10.22331/q-2017-05-11-11


*This is a Perspective on "Achieving quantum supremacy with sparse and noisy commuting quantum computations" by Michael J. Bremner, Ashley Montanaro, and Dan J. Shepherd, published in Quantum 1, 8 (2017).*

Quantum Views 1, 1 (2017).

https://doi.org/10.22331/qv-2017-04-25-1

**By Bill Fefferman, University of Maryland/NIST.**

One of the most active areas of research in quantum information is “quantum supremacy”. The central goal is to exhibit a provable quantum speedup over classical computation using the restrictive resources of existing or near-term quantum experiments. Starting with the work of Bremner, Jozsa, and Shepherd, it has been established that even non-universal commuting classes of quantum computations known as “Instantaneous Quantum Polynomial-time” (or IQP) circuits are capable of sampling from distributions that cannot be exactly sampled classically, under mild complexity assumptions.

As quantum supremacy leaps from the domain of theory to experiment, the major open questions involve understanding which intermediate models are capable of demonstrating supremacy, and how to account for experimentally realistic noise in these models. As a starting point, a primary objective has been to strengthen these sampling separations to hold even if the classical sampler is not required to sample exactly from the quantum outcome distribution but instead samples from a distribution close in total variation distance.

Follow-up work of Bremner, Montanaro and Shepherd addressed this stronger scenario. They show that the outcome distribution of IQP circuits cannot be approximately sampled classically, assuming a conjecture concerning the hardness of estimating the complex-temperature partition function of certain random instances of the Ising model. Understanding the feasibility and implications of this conjecture, as well as related conjectures of Aaronson and Arkhipov regarding estimating the permanent of random matrices, has since become one of the most important challenges in quantum complexity theory.

While not settling these conjectures, the present paper makes important contributions to our understanding of approximate sampling hardness results. The first main result extends the prior work to give similar conjectural evidence that approximate sampling from the output distribution of so-called “sparse” IQP circuits is classically hard. A randomly chosen “sparse” IQP circuit on n qubits consists of $O(n \log(n) )$ 2-qubit gates and can be implemented on a square lattice with a shallow depth circuit, with high probability. These structural properties reduce the experimental barrier of implementing such supremacy results and bring the proposal closer to the realm of currently implementable experiments.

The second result studies IQP sampling under a natural noise model, in which independent depolarizing noise is applied to every qubit at the end of the circuit. It is proven that the resulting output distribution becomes easy to sample from classically, illuminating the fragile nature of supremacy results. However, the authors develop new means for protecting against this depolarizing noise model, using ideas from classical error-correcting codes. Crucially, this result is achieved without the full overhead required by traditional quantum fault tolerance. It is very likely that these error-correcting tools will find use in future results on noise-tolerant quantum supremacy proposals.

Taken as a whole, these results give a fresh perspective to some of the most foundational questions in quantum computation—where quantum speedups come from, and under which experimentally motivated settings we can hope to observe these speedups. As such, this work offers a solid contribution to the quantum supremacy literature, and develops useful tools for analyzing candidates for future supremacy experiments.

This perspective is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.

To study which are the most general causal structures which are compatible with local quantum mechanics, Oreshkov et al. introduced the notion of a process: a resource shared between some parties that allows for quantum communication between them without a predetermined causal order. These processes can be used to perform several tasks that are impossible in standard quantum mechanics: they allow for the violation of causal inequalities, and provide an advantage for computational and communication complexity. Nonetheless, no process that can be used to violate a causal inequality is known to be physically implementable. There is therefore considerable interest in determining which processes are physical and which are just mathematical artefacts of the framework. Here we make the first step in this direction, by proposing a purification postulate: processes are physical only if they are purifiable. We derive necessary conditions for a process to be purifiable, and show that several known processes do not satisfy them.

Quantum 1, 10 (2017). https://doi.org/10.22331/q-2017-04-26-10

We investigate the finite-density phase diagram of a non-abelian $SU(2)$ lattice gauge theory in $(1+1)$-dimensions using tensor network methods. We numerically characterise the phase diagram as a function of the matter filling and of the matter-field coupling, identifying different phases, some of them appearing only at finite densities. For weak matter-field coupling we find a meson BCS liquid phase, which is confirmed by second-order analytical perturbation theory. At unit filling and for strong coupling, the system undergoes a phase transition to a charge density wave of single-site (spin-0) mesons via spontaneous chiral symmetry breaking. At finite densities, the chiral symmetry is restored almost everywhere, and the meson BCS liquid becomes a simple liquid at strong couplings, with the exception of filling two-thirds, where a charge density wave of mesons spreading over neighbouring sites appears. Finally, we identify two tri-critical points between the chiral and the two liquid phases which are compatible with a $SU(2)_2$ Wess-Zumino-Novikov-Witten model. Here we do not perform the continuum limit but we explicitly address the global $U(1)$ charge conservation symmetry.

Quantum 1, 9 (2017). https://doi.org/10.22331/q-2017-04-25-9

The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of n qubits in depth O(sqrt(n) log n), and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results demonstrate the challenges faced by experiments designed to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.

Quantum 1, 8 (2017). https://doi.org/10.22331/q-2017-04-25-8

We give a complete proposal showing how to detect the non-classical nature of photonic states with naked eyes as detectors. The enabling technology is a sub-Poissonian photonic state that is obtained from single photons, displacement operations in phase space and basic non-photon-number-resolving detectors. We present a detailed statistical analysis of our proposal including imperfect photon creation and detection and a realistic model of the human eye. We conclude that a few tens of hours are sufficient to certify non-classical light with the human eye with a p-value of 10%.

Quantum 1, 7 (2017). https://doi.org/10.22331/q-2017-04-25-7

We apply classical algorithms for approximately solving constraint satisfaction problems to find bounds on extremal eigenvalues of local Hamiltonians. We consider spin Hamiltonians for which we have an upper bound on the number of terms in which each spin participates, and find extensive bounds for the operator norm and ground-state energy of such Hamiltonians under this constraint. In each case the bound is achieved by a product state which can be found efficiently using a classical algorithm.

Quantum 1, 6 (2017). https://doi.org/10.22331/q-2017-04-25-6

Characterizing quantum systems through experimental data is critical to applications as diverse as metrology and quantum computing. Analyzing this experimental data in a robust and reproducible manner is made challenging, however, by the lack of readily-available software for performing principled statistical analysis. We improve the robustness and reproducibility of characterization by introducing an open-source library, QInfer, to address this need. Our library makes it easy to analyze data from tomography, randomized benchmarking, and Hamiltonian learning experiments either in post-processing, or online as data is acquired. QInfer also provides functionality for predicting the performance of proposed experimental protocols from simulated runs. By delivering easy-to-use characterization tools based on principled statistical analysis, QInfer helps address many outstanding challenges facing quantum technology.

Quantum 1, 5 (2017). https://doi.org/10.22331/q-2017-04-25-5

We study the fundamental limits on the reliable storage of quantum information in lattices of qubits by deriving tradeoff bounds for approximate quantum error correcting codes. We introduce a notion of local approximate correctability and code distance, and give a number of equivalent formulations thereof, generalizing various exact error-correction criteria. Our tradeoff bounds relate the number of physical qubits $n$, the number of encoded qubits $k$, the code distance $d$, the accuracy parameter $\delta$ that quantifies how well the erasure channel can be reversed, and the locality parameter $\ell$ that specifies the length scale at which the recovery operation can be done. In a regime where the recovery is successful to accuracy $\epsilon$ that is exponentially small in $\ell$, which is the case for perturbations of local commuting projector codes, our bound reads $kd^{\frac{2}{D-1}} \le O\bigl(n (\log n)^{\frac{2D}{D-1}} \bigr)$ for codes on $D$-dimensional lattices of Euclidean metric. We also find that the code distance of any local approximate code cannot exceed $O\bigl(\ell n^{(D-1)/D}\bigr)$ if $\delta \le O(\ell n^{-1/D})$. As a corollary of our formulation of correctability in terms of logical operator avoidance, we show that the code distance $d$ and the size $\tilde d$ of a minimal region that can support all approximate logical operators satisfies $\tilde d d^{\frac{1}{D-1}}\le O\bigl( n \ell^{\frac{D}{D-1}} \bigr)$, where the logical operators are accurate up to $O\bigl( ( n \delta / d )^{1/2}\bigr)$ in operator norm. Finally, we prove that for two-dimensional systems if logical operators can be approximated by operators supported on constant-width flexible strings, then the dimension of the code space must be bounded. This supports one of the assumptions of algebraic anyon theories, that there exist only finitely many anyon types.

Quantum 1, 4 (2017). https://doi.org/10.22331/q-2017-04-25-4

We consider the problem of reproducing the correlations obtained by arbitrary local projective measurements on the two-qubit Werner state $\rho = v |\psi_- \rangle \langle\psi_- | + (1- v ) \frac{1}{4}$ via a local hidden variable (LHV) model, where $|\psi_- \rangle$ denotes the singlet state. We show analytically that these correlations are local for $v = 999\times689\times10^{-6}\cos^4(\pi/50) \simeq 0.6829$. In turn, as this problem is closely related to a purely mathematical one formulated by Grothendieck, our result implies a new bound on the Grothendieck constant $K_G(3) \leq 1/v \simeq 1.4644$. We also present a LHV model for reproducing the statistics of arbitrary POVMs on the Werner state for $v \simeq 0.4553$. The techniques we develop can be adapted to construct LHV models for other entangled states, as well as bounding other Grothendieck constants.

Quantum 1, 3 (2017). https://doi.org/10.22331/q-2017-04-25-3
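As a standalone numerical sanity check (not part of the paper), the quoted visibility and the resulting Grothendieck-constant bound can be reproduced in a few lines; note that the fourth power of the cosine is what matches the quoted value 0.6829:

```python
import math

# Visibility below which the Werner-state correlations admit an LHV model:
# v = 999 * 689 * 10^-6 * cos^4(pi/50); the cos^4 reproduces the quoted 0.6829.
v = 999 * 689 * 1e-6 * math.cos(math.pi / 50) ** 4
print(round(v, 4))      # 0.6829

# The implied bound on the Grothendieck constant: K_G(3) <= 1/v
print(round(1 / v, 4))  # 1.4644
```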

The surface code is one of the most successful approaches to topological quantum error-correction. It boasts the smallest known syndrome extraction circuits and correspondingly largest thresholds. Defect-based logical encodings of a new variety called twists have made it possible to implement the full Clifford group without state distillation. Here we investigate a patch-based encoding involving a modified twist. In our modified formulation, the resulting codes, called triangle codes for the shape of their planar layout, have only weight-four checks and relatively simple syndrome extraction circuits that maintain a high, near surface-code-level threshold. They also use 25% fewer physical qubits per logical qubit than the surface code. Moreover, benefiting from the twist, we can implement all Clifford gates by lattice surgery without the need for state distillation. By a surgical transformation to the surface code, we also develop a scheme of doing all Clifford gates on surface code patches in an atypical planar layout, though with less qubit efficiency than the triangle code. Finally, we remark that logical qubits encoded in triangle codes are naturally amenable to logical tomography, and the smallest triangle code can demonstrate high-pseudothreshold fault-tolerance to depolarizing noise using just 13 physical qubits.

Quantum 1, 2 (2017). https://doi.org/10.22331/q-2017-04-25-2

Self-testing allows classical referees to verify the quantum behaviour of some untrusted devices. Recently we developed a framework for building large self-tests by repeating a smaller self-test many times in parallel. However, the framework did not apply to the CHSH test, which tests a maximally entangled pair of qubits. CHSH is the most well known and widely used test of this type. Here we extend the parallel self-testing framework to build parallel CHSH self-tests for any number of pairs of maximally entangled qubits. Our construction achieves an error bound which is polynomial in the number of tested qubit pairs.

Quantum 1, 1 (2017). https://doi.org/10.22331/q-2017-04-25-1

Before our launch issue finally arrives we have further exciting news to share: Quantum is cooperating with Fermat’s Library, a project enabling collaborative online annotation of scientific papers. Authors of papers published in Quantum will be given the entirely voluntary option to opt in to this extra free service.

Fermat’s Library is a platform for illuminating academic papers and sharing knowledge. Our goal is to make papers more open and accessible and to foster discussions around their content. Fermat’s Library was born in 2015 and has since been completely free and open.

While authors write with the best intention of conveying information in the most didactically effective way, the scientific community is diverse, and scientists with different backgrounds can greatly benefit from annotated in-depth explanations. On other occasions, important implications of seemingly innocuous theorems and lemmas only manifest themselves after publication, rendering literature search tedious, as the retrospectively central result seems to be hidden or only implicitly stated. These issues and many more can be remedied by the scientific community collectively scribbling in the margins of important papers (or, nowadays, implementing hyperlinked TeX comments in the online pdf).

Hence, for all papers published in Quantum there will be the option for authors to enable fellow scientists to scribble in the margins of their works on Fermat’s Library.

Quantum has received over 40 submissions since the launch in mid-November! We would like to take this as an opportunity to thank all authors who have shown their support for and trust in the journal by submitting their works to us! We would also like to thank our tireless team of editors, and all the referees who contributed their time and expertise to assess (and, crucially, raise) the quality of papers submitted to Quantum.

Behind the scenes our editors and referees are working hard to evaluate the submitted works and help authors improve their manuscripts. Several high quality manuscripts are now in the final stages of the peer-review process and we expect them to be accepted and then published in Quantum within the next few weeks.

In the meantime, we can give you a snippet into the editorial process at Quantum, with preliminary analytics, courtesy of Scholastica.

*Here is our submission record (which includes resubmissions after a round of review).*

The stats are naturally skewed towards desk rejection at this stage, because…

*It takes a much shorter amount of time to desk reject a paper than to reject after reviews (when we have to wait until two referee reports are submitted), and that is in turn faster than acceptance (which usually takes a couple of review rounds to implement referee suggestions).*

In the next few months, as acceptances catch up, we expect to see the acceptance rate increase. At the moment, it looks like roughly half of the submissions being processed are likely to be accepted.

Stay tuned for updates!

How can we ensure that Quantum is financially sustainable? And how exactly are publication fees going to work? In this post we go into the gritty details of Quantum’s finances and how we plan its long-term viability.

Let us set the scene with the following facts:

- **Quantum is non-profit**. By its legal form it cannot make sustained profits, and it can spend money only on a narrowly defined set of activities related to running the journal.
- Upon acceptance, Quantum offers the possibility to have the **publication fee completely waived**. This offer extends to everyone and is not bound to any conditions.

Let us next explain what are the costs of running the journal:

- $10 per submitted article for the Scholastica platform,
- A few dollars per published article and a few hundred euros of annual fees for assigning DOIs through Crossref,
- Running the website and other smaller costs adding up to a few hundred euros per year,
- One-off costs for legal advice and trade-mark registration of the order of several thousand euros.
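To illustrate how these per-submission costs interact with income that arrives only per accepted paper, here is a back-of-the-envelope sketch. The per-article and fixed costs are the rough figures listed above; the submission volume, acceptance rate, and fee mix are invented purely for illustration, and currencies are crudely treated as equal:

```python
# Hypothetical break-even sketch for the journal's cost model.
# All scenario numbers (submissions, acceptance rate, average fee) are invented.
submissions_per_year = 100       # invented scenario
acceptance_rate = 0.5            # invented scenario
avg_fee_per_accepted = 150.0     # EUR, midpoint of the 100/200 EUR fees, ignoring waivers

cost_per_submission = 10.0       # Scholastica fee per submitted article (USD, treated as EUR)
cost_per_published = 3.0         # "a few dollars" per published article for the DOI
fixed_costs = 500.0              # EUR, website plus annual Crossref fees, rough

accepted = submissions_per_year * acceptance_rate
costs = (submissions_per_year * cost_per_submission
         + accepted * cost_per_published
         + fixed_costs)
income = accepted * avg_fee_per_accepted
print(f"costs: {costs:.0f}, income: {income:.0f}, balance: {income - costs:+.0f}")
```

The point of the sketch is the structural asymmetry: costs scale with submissions while income scales with acceptances, which is exactly the incentive danger discussed below.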

Finally, let us explain the axioms of the finances of Quantum:

- **No good research should remain unpublished** for a lack of funds.
- Financial issues should **never impact the editorial decisions** of Quantum.

What are the dangers?

- Since we pay per submission but get paid per accepted publication, in times of financial scarcity there would be an **incentive to accept more** works.
- To not compromise its **independence**, Quantum should never be too dependent on just one source of funding.
- It is currently very hard to predict **how many submissions** Quantum will receive and, because Quantum does not have a target rejection rate, it is even less clear how many articles it will publish. This can negatively impact finances in two ways: if Quantum receives very few submissions, the **annual costs** can become significant (this is currently very unlikely); if Quantum receives more submissions than can ultimately be handled by volunteers alone, Quantum might have to **employ someone** for a couple of hours a week to technically check papers before publication (DOI links, …), to put them on the website, and to deposit the metadata with Crossref (this seems possible in the medium-term future).

How can the axioms be achieved given these dangers?

- Quantum will rely on **multiple sources of funding**:
  - Publication fees (more on that below).
  - Support from research institutes, physical societies, and other funding bodies. We have already received 2000€ from IQOQI to pay for the first round of submissions. Further pledges, based on the achievement of certain goals, have been made.
  - Private donations. These are typically small amounts, but they add up.
- **Radical transparency**. Quantum practices **public accounting**. This means that all its earnings and expenditures are made public through a shared spreadsheet for everyone to see and check. Private donations are anonymized.
- **Regular revisions** of the publication fee, justified on the basis of the openly available financial data, to adapt to changes in the financial situation.
- **Flexibility** in the publication fees. Since our fees are regular publication fees, as in any other open-access journal, many research groups have funding available to cover them. We offer **two fees**, a regular 200€ and a discounted 100€ fee, to enable well-funded groups to support us. Groups with no funding can make use of the offered waiver.

Given the relatively low costs and the support by strong partners, we are very confident that Quantum will be able to offer the same or even lower publication fees in the future and that it will never let financial issues influence editorial decisions.

If you think that your institution would be willing to make a small yearly contribution to our mission, please contact us at info@quantum-journal.org. Even a small contribution of 1000€ per year would help Quantum enormously, while being a very small amount compared to usual open-access publishing fees.

Today is launch day! Quantum opens for submissions!

To submit, please go to the For Authors page, read carefully, agree to Quantum’s terms and conditions, and hit the submit button. This takes you to Scholastica, where you just have to enter the arXiv identifier of your work to import the metadata and manuscript from the arXiv (after creating an account), and that’s it! You can also suggest referees and name people to avoid. No letter to the editor is required; Quantum expects the manuscript to speak for itself.

We would like to take the opportunity to say thank you to all the people who have directly contributed to this project, who have supported us along the way, gave us advice, discussed with us, voiced their doubts, offered help, made generous donations, and expressed their enthusiasm. Thank you Quantum community!
