The semi-device-independent approach provides a framework for prepare-and-measure quantum protocols using devices whose behavior need not be characterized or trusted, except for a single assumption on the dimension of the Hilbert space characterizing the quantum carriers. Here, we propose instead to constrain the quantum carriers through a bound on the mean value of a well-chosen observable. This modified assumption is physically better motivated than a dimension bound and closer to the description of actual experiments. In particular, we consider quantum optical schemes where the source emits quantum states described in an infinite-dimensional Fock space and model our assumption as an upper bound on the average photon number in the emitted states. We characterize the set of correlations that may be exhibited in the simplest possible scenario compatible with our new framework, based on two energy-constrained state preparations and a two-outcome measurement. Interestingly, we uncover the existence of quantum correlations exceeding the set of classical correlations that can be produced by devices behaving in a purely pre-determined fashion (possibly including shared randomness). This feature suggests immediate applications to certified randomness generation. Along this line, we analyze the achievable correlations in several prepare-and-measure optical schemes with a mean photon number constraint and demonstrate that they allow for the generation of certified randomness. Our simplest optical scheme works by the on-off keying of an attenuated laser source followed by photocounting. It opens the path to more sophisticated energy-constrained semi-device-independent quantum cryptography protocols, such as quantum key distribution.

Quantum 1, 33 (2017). https://doi.org/10.22331/q-2017-11-18-33
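The simplest scheme in the abstract, on-off keying of an attenuated laser followed by photocounting, is easy to sketch numerically. In the sketch below, the energy bound `omega` and the equiprobable inputs are illustrative assumptions, not values taken from the paper:

```python
import math

# Two preparations: x = 0 sends the vacuum |0>, x = 1 sends a coherent
# state |alpha>.  The energy assumption bounds the mean photon number
# averaged over equiprobable inputs (omega = 0.25 is an illustrative value).
omega = 0.25
alpha_sq = 2 * omega                     # |alpha|^2 saturating the bound
mean_photon = 0.5 * 0 + 0.5 * alpha_sq   # ensemble-average photon number
assert mean_photon <= omega + 1e-12

# Ideal photocounting: "click" means at least one photon is detected.
# A coherent state has Poissonian photon statistics, so
# p(no click) = exp(-|alpha|^2); the vacuum never clicks.
p_click = {0: 0.0, 1: 1 - math.exp(-alpha_sq)}
print(p_click[1])   # ~0.393 for omega = 0.25
```

The gap between the two click probabilities is the correlation the protocol exploits; for a detector with efficiency less than one, `alpha_sq` in the click probability would be scaled down accordingly.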


Who is citing papers published in Quantum? This is now easier to find than ever, since Quantum has started displaying Cited-by information on the pages of its articles. You can try it out for example for this paper (scroll to the bottom).

Following its mission to make publishing more open and transparent, Quantum is also making its own citation meta-data available to all other publishers participating in the Initiative for Open Citations (I4OC). Thereby, Quantum is contributing to a database of citation meta-data that can be accessed by all participants in a fair way.

We have also applied for Quantum to be included in the Directory of Open Access Journals, the Web of Science, and Scopus. The evaluation in all three cases is ongoing and can take up to 12 months. Google Scholar is also already crawling and indexing the content published by Quantum. We will keep you posted about updates!

Thank you CQT for supporting open scientific publishing!

Quantum practices open accounting, which you may consult at any time.


We develop a general framework to investigate fluctuations of non-commuting observables. To this end, we consider the Keldysh quasi-probability distribution (KQPD). This distribution provides a measurement-independent description of the observables of interest and their time-evolution. Nevertheless, positive probability distributions for measurement outcomes can be obtained from the KQPD by taking into account the effect of measurement back-action and imprecision. Negativity in the KQPD can be linked to an interference effect and acts as an indicator for non-classical behavior. Notable examples of the KQPD are the Wigner function and the full counting statistics, both of which have been used extensively to describe systems in the absence as well as in the presence of a measurement apparatus. Here we discuss the KQPD and its moments in detail and connect it to various time-dependent problems including weak values, fluctuating work, and Leggett-Garg inequalities. Our results are illustrated using the simple example of two subsequent, non-commuting spin measurements.

Quantum 1, 32 (2017). https://doi.org/10.22331/q-2017-10-12-32
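For two subsequent projective measurements, the KQPD is closely related to the Kirkwood-Dirac quasi-probability $Q(a,b) = \langle b|a\rangle\langle a|\psi\rangle\langle\psi|b\rangle$. The sketch below, with a state and bases chosen purely for illustration, evaluates it for a $\sigma_z$ measurement followed by a $\sigma_x$ measurement and exhibits a negative entry, the non-classicality indicator discussed in the abstract:

```python
import math

# State |psi> = cos(theta)|0> - sin(theta)|1>, then a sigma_z measurement
# followed by a sigma_x measurement (all amplitudes real for simplicity).
theta = math.pi / 8
psi = [math.cos(theta), -math.sin(theta)]
z_basis = [[1.0, 0.0], [0.0, 1.0]]   # |0>, |1>
s = 1 / math.sqrt(2)
x_basis = [[s, s], [s, -s]]          # |+>, |->

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))   # real inner product

# Kirkwood-Dirac quasi-probability Q(a, b) = <b|a><a|psi><psi|b>
Q = [[dot(x_basis[b], z_basis[a]) * dot(z_basis[a], psi) * dot(psi, x_basis[b])
      for b in range(2)] for a in range(2)]

total = sum(sum(row) for row in Q)
negativity = min(min(row) for row in Q)
print(total)        # the quasi-probabilities sum to 1
print(negativity)   # Q(1, +) < 0: non-commutativity shows up as negativity
```

Both marginals, $\sum_b Q(a,b) = |\langle a|\psi\rangle|^2$ and $\sum_a Q(a,b) = |\langle b|\psi\rangle|^2$, are correctly reproduced, as a measurement-independent description requires.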


Quantum continues to grow and is hiring a management assistant. For now, this is a part-time position for six months based in Vienna.

To learn more have a look at the job announcement and application page.

We present an infinite family of protocols to distill magic states for $T$-gates that has a low space overhead and uses an asymptotic number of input magic states to achieve a given target error that is conjectured to be optimal. The space overhead, defined as the ratio between the physical qubits to the number of output magic states, is asymptotically constant, while both the number of input magic states used per output state and the $T$-gate depth of the circuit scale linearly in the logarithm of the target error $\delta$ (up to $\log \log 1/\delta$). Unlike other distillation protocols, this protocol achieves this performance without concatenation and the input magic states are injected at various steps in the circuit rather than all at the start of the circuit. The protocol can be modified to distill magic states for other gates at the third level of the Clifford hierarchy, with the same asymptotic performance. The protocol relies on the construction of weakly self-dual CSS codes with many logical qubits and large distance, allowing us to implement control-SWAPs on multiple qubits. We call this code the "inner code". The control-SWAPs are then used to measure properties of the magic state and detect errors, using another code that we call the "outer code". Alternatively, we use weakly self-dual CSS codes which implement controlled Hadamards for the inner code, reducing circuit depth. We present several specific small examples of this protocol.

Quantum 1, 31 (2017). https://doi.org/10.22331/q-2017-10-03-31


We study the dynamics of a quantum impurity immersed in a Bose-Einstein condensate as an open quantum system in the framework of the quantum Brownian motion model. We derive a generalized Langevin equation for the position of the impurity. The Langevin equation is an integrodifferential equation that contains a memory kernel and is driven by a colored noise. These result from considering the environment as given by the degrees of freedom of the quantum gas, and thus depend on its parameters, e.g. interaction strength between the bosons, temperature, etc. We study the role of the memory on the dynamics of the impurity. When the impurity is untrapped, we find that it exhibits a super-diffusive behavior at long times. We find that back-flow in energy between the environment and the impurity occurs during evolution. When the particle is trapped, we calculate the variance of the position and momentum to determine how they compare with the Heisenberg limit. One important result of this paper is that we find position squeezing for the trapped impurity at long times. We determine the regime of validity of our model and the parameters in which these effects can be observed in realistic experiments.

Quantum 1, 30 (2017). https://doi.org/10.22331/q-2017-09-27-30


Non-Markovian stochastic Schrödinger equations (NMSSE) are important tools in quantum mechanics, from the theory of open systems to foundations. Yet, in general, they are but formal objects: their solution can be computed numerically only in some specific cases or perturbatively. This article is focused on the NMSSE themselves rather than on the open-system evolution they unravel and aims at making them less abstract. Namely, we propose to write the stochastic realizations of linear NMSSE as averages over the solutions of an auxiliary equation with an additional random field. Our method yields a non-perturbative numerical simulation algorithm for generic linear NMSSE that can be made arbitrarily accurate for reasonably short times. For isotropic complex noises, the method extends from linear to non-linear NMSSE and allows one to sample the solutions of norm-preserving NMSSE directly.

Quantum 1, 29 (2017). https://doi.org/10.22331/q-2017-09-19-29


The qubit depolarizing channel with noise parameter $\eta$ transmits an input qubit perfectly with probability $1-\eta$, and outputs the completely mixed state with probability $\eta$. We show that its complementary channel has positive quantum capacity for all $\eta > 0$. Thus, we find that there exists a single parameter family of channels having the peculiar property of having positive quantum capacity even when the outputs of these channels approach a fixed state independent of the input. Comparisons with other related channels, and implications on the difficulty of studying the quantum capacity of the depolarizing channel are discussed.

Quantum 1, 28 (2017). https://doi.org/10.22331/q-2017-09-19-28
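The channel itself is easy to state concretely; the minimal sketch below implements its action on a density matrix (note that the paper's result concerns the complementary channel, i.e. the environment's share of the output, which is not constructed here):

```python
# Qubit depolarizing channel with noise parameter eta:
#   rho  ->  (1 - eta) * rho + eta * I/2
# acting on 2x2 density matrices written as nested lists.
def depolarize(rho, eta):
    mixed = [[0.5, 0.0], [0.0, 0.5]]
    return [[(1 - eta) * rho[i][j] + eta * mixed[i][j] for j in range(2)]
            for i in range(2)]

rho = [[1.0, 0.0], [0.0, 0.0]]    # input |0><0|
print(depolarize(rho, 0.0))       # eta = 0: input transmitted perfectly
print(depolarize(rho, 1.0))       # eta = 1: completely mixed state,
                                  # independent of the input
```

At $\eta = 1$ the output carries no information about the input at all, which is what makes the abstract's claim striking: the family of complementary channels keeps a positive quantum capacity for every $\eta > 0$.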


For improved security and privacy, quantum-journal.org is now available under https://quantum-journal.org/ in SSL-encrypted form with a trusted certificate. In this way, you can be sure that all information you see on quantum-journal.org actually comes from that website and that all form data you submit, such as searches or appeals, are uploaded in a secure fashion.

We establish general limits on how precisely a parameter, e.g. frequency or the strength of a magnetic field, can be estimated with the aid of full and fast quantum control. We consider uncorrelated noisy evolutions of N qubits and show that fast control allows one to fully restore the Heisenberg scaling (~1/N^2) for all rank-one Pauli noise except dephasing. For all other types of noise the asymptotic quantum enhancement is unavoidably limited to a constant-factor improvement over the standard quantum limit (~1/N) even when allowing for the full power of fast control. The latter holds both in the single-shot and infinitely-many repetitions scenarios. However, even in this case allowing for fast quantum control helps to increase the improvement factor. Furthermore, for frequency estimation with finite resources we show how a parallel scheme utilizing any fixed number of entangled qubits but no fast quantum control can be outperformed by a simple, easily implementable, sequential scheme which only requires entanglement between one sensing and one auxiliary qubit.

Quantum 1, 27 (2017). https://doi.org/10.22331/q-2017-09-06-27


We give a new upper bound on the quantum query complexity of deciding $st$-connectivity on certain classes of planar graphs, and show the bound is sometimes exponentially better than previous results. We then show Boolean formula evaluation reduces to deciding connectivity on just such a class of graphs. Applying the algorithm for $st$-connectivity to Boolean formula evaluation problems, we match the $O(\sqrt{N})$ bound on the quantum query complexity of evaluating formulas on $N$ variables, give a quadratic speed-up over the classical query complexity of a certain class of promise Boolean formulas, and show this approach can yield superpolynomial quantum/classical separations. These results indicate that this $st$-connectivity-based approach may be the "right" way of looking at quantum algorithms for formula evaluation.

Quantum 1, 26 (2017). https://doi.org/10.22331/q-2017-08-17-26


The minimal memory required to model a given stochastic process - known as the statistical complexity - is a widely adopted quantifier of structure in complexity science. Here, we ask if quantum mechanics can fundamentally change the qualitative behaviour of this measure. We study this question in the context of the classical Ising spin chain. In this system, the statistical complexity is known to grow monotonically with temperature. We evaluate the spin chain's quantum mechanical statistical complexity by explicitly constructing its provably simplest quantum model, and demonstrate that this measure exhibits drastically different behaviour: it rises to a maximum at some finite temperature then tends back towards zero for higher temperatures. This demonstrates how complexity, as captured by the amount of memory required to model a process, can exhibit radically different behaviour when quantum processing is allowed.

Quantum 1, 25 (2017). https://doi.org/10.22331/q-2017-08-11-25


The production of quantum states required for use in quantum protocols and technologies is studied by developing the tools to re-engineer a perfect state transfer spin chain so that a separable input excitation is output over multiple sites. We concentrate in particular on cases where the excitation is superposed over a small subset of the qubits on the spin chain, known as fractional revivals, demonstrating that spin chains are capable of producing a far greater range of fractional revivals than previously known, at high speed. We also provide a numerical technique for generating chains that produce arbitrary single-excitation states, such as the W state.

Quantum 1, 24 (2017). https://doi.org/10.22331/q-2017-08-10-24
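The starting point of such re-engineering, a chain with perfect state transfer, is easy to check numerically. The standard choice couples sites $n$ and $n+1$ with strength $J_n = (\lambda/2)\sqrt{n(N-n)}$; the pure-Python sketch below (with $N = 3$ and $\lambda = 1$ as illustrative parameters) integrates the single-excitation Schrödinger equation and verifies that an excitation injected at one end arrives at the other with fidelity close to 1 at $t = \pi/\lambda$:

```python
import math

# Single-excitation sector of an N-site XX chain with the standard
# perfect-state-transfer couplings J_n = (lam/2) * sqrt(n * (N - n)).
N, lam = 3, 1.0
J = [(lam / 2) * math.sqrt(n * (N - n)) for n in range(1, N)]

def apply_H(psi):
    """Tridiagonal hopping Hamiltonian acting on the amplitude vector."""
    out = [0j] * N
    for n in range(N - 1):
        out[n] += J[n] * psi[n + 1]
        out[n + 1] += J[n] * psi[n]
    return out

def evolve(psi, t, steps=4000):
    """Integrate i d|psi>/dt = H|psi> with a 4th-order Taylor step."""
    dt = t / steps
    for _ in range(steps):
        term, acc = list(psi), list(psi)
        for k in range(1, 5):
            term = [(-1j * dt / k) * h for h in apply_H(term)]
            acc = [a + s for a, s in zip(acc, term)]
        psi = acc
    return psi

psi0 = [1 + 0j] + [0j] * (N - 1)      # excitation injected at site 1
psiT = evolve(psi0, math.pi / lam)    # perfect transfer time t = pi/lam
print(abs(psiT[-1]) ** 2)             # transfer fidelity at site N, ~1.0
```

Fractional revivals then correspond to modified couplings for which the excitation re-localizes over several sites at the revival time instead of only the last one.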


*This is a Perspective on "Causal hierarchy of multipartite Bell nonlocality" by Rafael Chaves, Daniel Cavalcanti, and Leandro Aolita, published in Quantum 1, 23 (2017).*

Quantum Views 1, 3 (2017).

https://doi.org/10.22331/qv-2017-08-04-3

**By Paul Skrzypczyk, School of Physics, University of Bristol, UK.**

The results of measurements performed locally on entangled quantum systems shared among multiple parties can be correlated in ways which are inexplicable by any classical mechanism. This phenomenon, known as Bell nonlocality, is a fundamental and fascinating aspect of quantum theory [1].

What is meant by classically inexplicable? It means that there is no classical model (often called a local hidden variable model) with the same underlying *causal structure* that can reproduce the quantum predictions. The causal structure is the implicit geometry of the setup – for example the fact that each party’s local measurement result is independent of the other parties’ measurement choices (known as no-signaling), or that the measurement choices are themselves independent of everything else (known as measurement independence).

If we consider trying to reproduce quantum predictions using classical mechanisms with *relaxed causal structure*, then indeed they can often do so. For example, if there is communication among all the parties, then this classical mechanism can readily reproduce any quantum correlation, nonlocal or not. What is remarkable is that some seemingly powerful causal relaxations still cannot reproduce all of the predictions of quantum theory. For example, if the communication is restricted to all but one of the parties, then this is not able to reproduce everything that can be achieved by making local measurements on multipartite entangled states [2]. The nonlocality of such ‘genuine multipartite nonlocal’ correlations is therefore shown to be very strong, as highlighted by the difficulty of classically reproducing them, even given much more freedom.

One barrier to a systematic study of causal relaxations is that the number of relaxations grows exponentially in the number of parties. At least, so it naively seemed. The main result of the work of Chaves, Cavalcanti and Aolita [3] is to identify that large classes of causal relaxations are in fact equivalent to each other, as far as the non-signaling correlations they can produce are concerned. What is really of interest, then, is the number of inequivalent classes of causal relaxations, which they show is much smaller and has a natural hierarchical structure, depending on the total number of relaxations introduced.

Focusing on the tripartite scenario, they demonstrate the power of their result by showing that there are in fact only 8 different inequivalent classes of causal relaxations which are interesting – ones which are not powerful enough to reproduce all non-signaling correlations. Previous results had shown that quantum correlations are inexplicable by 6 of these classes [4]. Of the remaining two classes, the first, which sits at the top of the hierarchy and is termed the *star* (since one party receives communication from all others), is shown, somewhat astonishingly, to be unable to reproduce all quantum nonlocal correlations, despite being the strongest possible causal relaxation. The second class, termed the *circle* (since each party communicates to their neighbour in a circular fashion), remains the intriguing one: it is not known whether it can reproduce all quantum correlations or falls short.

The true power of the results comes in the unifying and simplified view they provide for studying relaxed causal structures. What was previously a vast forest is now a well organised playground, ready to be explored, and played in, as we continue to push forward our understanding of quantum non-locality.

[1] See, e.g. for a comprehensive review N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani and S. Wehner, Bell nonlocality, Rev. Mod. Phys. 86, 419 (2014).

https://doi.org/10.1103/RevModPhys.86.419

[2] G. Svetlichny, Distinguishing three-body from two-body nonseparability by a Bell-type inequality, Phys. Rev. D 35, 3066 (1987).

https://doi.org/10.1103/PhysRevD.35.3066

[3] R. Chaves, D. Cavalcanti and L. Aolita, Causal hierarchy of multipartite Bell nonlocality, Quantum 1, 23 (2017).

https://doi.org/10.22331/q-2017-08-04-23

[4] N. S. Jones, N. Linden and S. Massar, Extent of multiparticle quantum nonlocality, Phys. Rev. A 71, 042329 (2005).

https://doi.org/10.1103/PhysRevA.71.042329

This perspective is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.

As with entanglement, different forms of Bell nonlocality arise in the multipartite scenario. These can be defined in terms of relaxations of the causal assumptions in local hidden-variable theories. However, a characterisation of all the forms of multipartite nonlocality has until now been out of reach, mainly due to the complexity of generic multipartite causal models. Here, we employ the formalism of Bayesian networks to reveal connections among different causal structures that make possible a classification that is both practical and physically meaningful. Our framework holds for arbitrarily many parties. We apply it to study the tripartite scenario in detail, where we fully characterize all the nonlocality classes. Remarkably, we identify new highly nonlocal causal structures that cannot reproduce all quantum correlations. This shows, to our knowledge, the strongest form of quantum multipartite nonlocality known to date. Finally, as a by-product result, we derive a non-trivial Bell-type inequality with no quantum violation. Our findings constitute a significant step forward in the understanding of multipartite Bell nonlocality and open several avenues for future research.

Quantum 1, 23 (2017). https://doi.org/10.22331/q-2017-08-04-23


We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Finally, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

Quantum 1, 22 (2017). https://doi.org/10.22331/q-2017-07-28-22


Measurements of an object's temperature are important in many disciplines, from astronomy to engineering, as are estimates of an object's spatial configuration. We present the quantum optimal estimator for the temperature of a distant body based on the black body radiation received in the far-field. We also show how to perform separable quantum optimal estimates of the spatial configuration of a distant object, i.e. imaging. In doing so we necessarily deal with multi-parameter quantum estimation of incompatible observables, a problem that is poorly understood. We compare our optimal observables to the two mode analogue of lensed imaging and find that the latter is far from optimal, even when compared to measurements which are separable. To prove the optimality of the estimators we show that they minimise the cost function weighted by the quantum Fisher information---this is equivalent to maximising the average fidelity between the actual state and the estimated one.

Quantum 1, 21 (2017). https://doi.org/10.22331/q-2017-07-26-21

The notions of error and disturbance appearing in quantum uncertainty relations are often quantified by the discrepancy of a physical quantity from its ideal value. However, these real and ideal values are not the outcomes of simultaneous measurements, and comparing the values of unmeasured observables is not necessarily meaningful according to quantum theory. To overcome these conceptual difficulties, we take a different approach and define error and disturbance in an operational manner. In particular, we formulate both in terms of the probability that one can successfully distinguish the actual measurement device from the relevant hypothetical ideal by any experimental test whatsoever. This definition itself does not rely on the formalism of quantum theory, avoiding many of the conceptual difficulties of usual definitions. We then derive new Heisenberg-type uncertainty relations for both joint measurability and the error-disturbance tradeoff for arbitrary observables of finite-dimensional systems, as well as for the case of position and momentum. Our relations may be directly applied in information processing settings, for example to infer that devices which can faithfully transmit information regarding one observable do not leak any information about conjugate observables to the environment. We also show that Englert's wave-particle duality relation [PRL 77, 2154 (1996)] can be viewed as an error-disturbance uncertainty relation.

Quantum 1, 20 (2017). https://doi.org/10.22331/q-2017-07-25-20

We define the hitting time for a model of continuous-time open quantum walks in terms of quantum jumps. Our starting point is a master equation in Lindblad form, which can be taken as the quantum analogue of the rate equation for a classical continuous-time Markov chain. The quantum jump method is well known in the quantum optics community and has also been applied to simulate open quantum walks in discrete time. This method, however, is well-suited to continuous-time problems. It is shown here that a continuous-time hitting problem is amenable to analysis via quantum jumps: The hitting time can be defined as the time of the first jump. Using this fact, we derive the distribution of hitting times and explicit expressions for its statistical moments. Simple examples are considered to illustrate the final results. We then show that the hitting statistics obtained via quantum jumps is consistent with a previous definition for a measured walk in discrete time [Phys. Rev. A 73, 032341 (2006)] (when generalised to allow for non-unitary evolution and in the limit of small time steps). A caveat of the quantum-jump approach is that it relies on the final state (the state which we want to hit) to share only incoherent edges with other vertices in the graph. We propose a simple remedy to restore the applicability of quantum jumps when this is not the case and show that the hitting-time statistics will again converge to that obtained from the measured discrete walk in appropriate limits.

Quantum 1, 19 (2017). https://doi.org/10.22331/q-2017-07-21-19
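The first-jump definition above lends itself to a direct Monte Carlo sketch. The following illustration is hypothetical (the two-vertex graph, rates and function names are not taken from the paper): it evolves a state under the effective non-Hermitian Hamiltonian H_eff = H - (i/2) L†L and records the time at which the squared norm first drops below a random threshold, i.e. the time of the first jump.

```python
import numpy as np

def sample_hitting_time(H, L, psi0, dt=1e-3, t_max=50.0, rng=None):
    """Sample one hitting time with the quantum-jump (Monte Carlo
    wavefunction) method: evolve the state under the effective
    non-Hermitian Hamiltonian H_eff = H - (i/2) L^dag L and return
    the time of the first jump, which fires when the squared norm
    falls below a uniformly drawn threshold."""
    rng = np.random.default_rng() if rng is None else rng
    heff = H - 0.5j * (L.conj().T @ L)
    # First-order Euler step of the non-unitary Schroedinger equation.
    step = np.eye(len(psi0), dtype=complex) - 1j * dt * heff
    psi = psi0.astype(complex)
    threshold = rng.uniform()
    t = 0.0
    while t < t_max:
        psi = step @ psi
        t += dt
        if np.vdot(psi, psi).real < threshold:
            return t  # time of the first jump = hitting time
    return np.inf  # no jump occurred before t_max

# Hypothetical two-vertex walk: coherent hopping between vertices 0 and 1
# at rate gamma, incoherent absorption out of vertex 1 at rate kappa.
# Only L^dag L matters before the first jump, so the jump target is implicit.
gamma, kappa = 1.0, 1.0
H = gamma * np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
L = np.sqrt(kappa) * np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
rng = np.random.default_rng(7)
times = [sample_hitting_time(H, L, np.array([1.0, 0.0]), rng=rng)
         for _ in range(200)]
print("estimated mean hitting time:", np.mean(times))
```

Averaging many sampled first-jump times estimates the moments of the hitting-time distribution that the paper derives in closed form.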

We investigate decoupling, one of the most important primitives in quantum Shannon theory, by replacing the uniformly distributed random unitaries commonly used to achieve the protocol, with repeated applications of random unitaries diagonal in the Pauli-$Z$ and -$X$ bases. This strategy was recently shown to achieve an approximate unitary $2$-design after a number of repetitions of the process, which implies that the strategy gradually achieves decoupling. Here, we prove that even fewer repetitions of the process achieve decoupling at the same rate as that with the uniform ones, showing that rather imprecise approximations of unitary $2$-designs are sufficient for decoupling. We also briefly discuss efficient implementations of them and implications of our decoupling theorem to coherent state merging and relative thermalisation.

Quantum 1, 18 (2017). https://doi.org/10.22331/q-2017-07-21-18
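The circuit family in question alternates unitaries diagonal in the Z and X bases. A minimal construction sketch follows (function names and the small demo parameters are illustrative; the sketch builds such a circuit but does not verify its 2-design or decoupling properties). A unitary diagonal in the X basis is obtained by conjugating a Z-diagonal unitary with Hadamards on every qubit.

```python
import numpy as np

def random_diagonal_unitary(dim, rng):
    """Unitary diagonal in the computational (Pauli-Z) basis,
    with i.i.d. uniform random phases on the diagonal."""
    return np.diag(np.exp(2j * np.pi * rng.uniform(size=dim)))

def diagonal_circuit_unitary(n_qubits, repetitions, rng):
    """Alternate random unitaries diagonal in the Z and X bases.
    An X-diagonal unitary is H^{(x)n} D H^{(x)n} for Z-diagonal D,
    with H the single-qubit Hadamard gate."""
    dim = 2 ** n_qubits
    had1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    had = had1
    for _ in range(n_qubits - 1):
        had = np.kron(had, had1)  # Hadamard on every qubit
    U = np.eye(dim, dtype=complex)
    for _ in range(repetitions):
        U = random_diagonal_unitary(dim, rng) @ U              # Z-diagonal layer
        U = had @ random_diagonal_unitary(dim, rng) @ had @ U  # X-diagonal layer
    return U

rng = np.random.default_rng(1)
U = diagonal_circuit_unitary(3, repetitions=4, rng=rng)
print("max deviation from unitarity:",
      np.abs(U @ U.conj().T - np.eye(8)).max())
```

Each layer is diagonal in a product basis, which is what makes these circuits efficiently implementable compared to sampling Haar-random unitaries.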

We introduce a multi-mode squeezing coefficient to characterize entanglement in $N$-partite continuous-variable systems. The coefficient relates to the squeezing of collective observables in the $2N$-dimensional phase space and can be readily extracted from the covariance matrix. Simple extensions further allow one to reveal entanglement within specific partitions of a multipartite system. Applications with nonlinear observables allow for the detection of non-Gaussian entanglement.

Quantum 1, 17 (2017). https://doi.org/10.22331/q-2017-07-14-17
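Since the coefficient is extracted from the covariance matrix, the basic quantity involved can be sketched numerically. In the sketch below (an illustration, not the paper's exact coefficient), the vacuum variance is normalized to 1, the minimal variance over all normalized collective quadratures is the smallest eigenvalue of the covariance matrix, and the two-mode squeezed vacuum serves as a test case where that minimum equals e^{-2r}.

```python
import numpy as np

def min_collective_variance(cov):
    """Smallest variance of any normalized collective quadrature
    u = sum_i (a_i x_i + b_i p_i): the minimal eigenvalue of the
    covariance matrix.  A value below the vacuum level 1 signals
    squeezing of a collective observable."""
    return float(np.linalg.eigvalsh(cov).min())

def two_mode_squeezed_cov(r):
    """Covariance matrix of a two-mode squeezed vacuum in the
    (x1, p1, x2, p2) ordering, vacuum variance normalized to 1."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    z = np.diag([1.0, -1.0])
    i2 = np.eye(2)
    return np.block([[c * i2, s * z], [s * z, c * i2]])

cov = two_mode_squeezed_cov(1.0)
print("minimal collective variance:", min_collective_variance(cov))
```

The minimizing quadratures here are the familiar EPR combinations (x1 + x2)/√2 and (p1 - p2)/√2, whose variance drops below the vacuum level for any r > 0.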

In this work we consider the ground space connectivity problem for commuting local Hamiltonians. The ground space connectivity problem asks whether it is possible to go from one (efficiently preparable) state to another by applying a polynomial-length sequence of 2-qubit unitaries while remaining at all times in a state with low energy for a given Hamiltonian $H$. It was shown in [Gharibian and Sikora, ICALP15] that this problem is QCMA-complete for general local Hamiltonians, where QCMA is defined as QMA with a classical witness and BQP verifier. Here we show that the commuting version of the problem is also QCMA-complete. This provides one of the first examples where commuting local Hamiltonians exhibit complexity-theoretic hardness equivalent to general local Hamiltonians.

Quantum 1, 16 (2017). https://doi.org/10.22331/q-2017-07-14-16

*This is an Editorial on "Classification of all alternatives to the Born rule in terms of informational properties" by Thomas D. Galley and Lluis Masanes, published in Quantum 1, 15 (2017).*

Quantum Views 1, 2 (2017).

https://doi.org/10.22331/qv-2017-07-14-2

**By Eric Cavalcanti, Centre for Quantum Dynamics, Griffith University.**

One of the burning questions within quantum foundations is “Why the Quantum?” — what makes quantum theory special, singling it out from the space of possible theories?

Gleason's celebrated theorem is one of the earliest in a class of results that select some parts of the quantum formalism and aim to derive the rest from them. Gleason showed that if we assume the quantum representation of measurement outcomes as projectors on a Hilbert space, then any noncontextual assignment of probabilities has the same form as the quantum Born rule. Others, such as Deutsch and Zurek, have proposed derivations of the Born rule from the structure of the quantum state space and dynamics (plus some extra assumptions). One of the aims of this latter approach is to resolve the measurement problem within an Everettian “no-collapse” interpretation. Whether they achieve that aim, however, remains a matter of controversy.

The present paper likewise starts from the assumption that states and transformations have the same structure as in quantum theory, and asks what are all possible alternatives to represent measurements and probability rules compatible with those. Given this classification, what principles could single out the quantum Born rule?

The work is set within the framework of *generalised probabilistic theories* (GPTs). Building on work by Lucien Hardy, this framework provides a bare-bones description of physical theories through their operational implications, as tools to calculate probabilities for outcomes of measurements, given the state preparations and transformations allowed by the theory. Finding a resolution to “Why the Quantum?” then reduces to finding “reasonable” physical principles that allow one to single out quantum theory from the space of GPTs.

Galley and Masanes draw heavily upon group representation theory to show that all the alternatives compatible with the structure of the quantum state space and dynamics are in correspondence with a certain class of representations of the unitary group. This provides a full classification of all theories with alternative measurement postulates to the standard quantum ones. Quantum theory is then picked out as the unique theory that satisfies two hypotheses: no-restriction on measurements and pure-state bit symmetry.

Informally, ‘no-restriction’ postulates that all possible measurements on a state space are allowed by the theory. Bit symmetry is the requirement that any pair of distinguishable states can be mapped into any other pair of distinguishable states via an allowed transformation. While no-restriction has a less direct operational meaning, bit symmetry has an information-theoretic interpretation, and is related to the possibility of reversible computation.

The present work represents a significant technical contribution to the field of generalised probabilistic theories, and opens several questions, including the effect of including measurement update rules, composition of systems, and the information-processing capabilities of the classes of alternative theories introduced here.

This editorial is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.

The standard postulates of quantum theory can be divided into two groups: the first one characterizes the structure and dynamics of pure states, while the second one specifies the structure of measurements and the corresponding probabilities. In this work we keep the first group of postulates and characterize all alternatives to the second group that give rise to finite-dimensional sets of mixed states. We prove a correspondence between all these alternatives and a class of representations of the unitary group. Some features of these probabilistic theories are identical to quantum theory, but there are important differences in others. For example, some theories have three perfectly distinguishable states in a two-dimensional Hilbert space. Others have exotic properties such as lack of bit symmetry, the violation of no simultaneous encoding (a property similar to information causality) and the existence of maximal measurements without phase groups. We also analyze which of these properties single out the Born rule.

Quantum 1, 15 (2017). https://doi.org/10.22331/q-2017-07-14-15

In this work we present a security analysis for quantum key distribution, establishing a rigorous tradeoff between various protocol and security parameters for a class of entanglement-based and prepare-and-measure protocols. The goal of this paper is twofold: 1) to review and clarify the state-of-the-art security analysis based on entropic uncertainty relations, and 2) to provide an accessible resource for researchers interested in a security analysis of quantum cryptographic protocols that takes into account finite resource effects. For this purpose we collect and clarify several arguments spread in the literature on the subject with the goal of making this treatment largely self-contained. More precisely, we focus on a class of prepare-and-measure protocols based on the Bennett-Brassard (BB84) protocol as well as a class of entanglement-based protocols similar to the Bennett-Brassard-Mermin (BBM92) protocol. We carefully formalize the different steps in these protocols, including randomization, measurement, parameter estimation, error correction and privacy amplification, allowing us to be mathematically precise throughout the security analysis. We start from an operational definition of what it means for a quantum key distribution protocol to be secure and derive simple conditions that serve as sufficient conditions for secrecy and correctness. We then derive and eventually discuss tradeoff relations between the block length of the classical computation, the noise tolerance, the secret key length and the security parameters for our protocols. Our results significantly improve upon previously reported tradeoffs.

Quantum 1, 14 (2017). https://doi.org/10.22331/q-2017-07-14-14
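The paper's contribution is the finite-size tradeoff; as a point of reference only, the familiar textbook asymptotic BB84 key rate per sifted bit, 1 - 2h(Q) with h the binary entropy and Q the quantum bit error rate, can be sketched as follows (this formula is not the paper's finite-key bound, which is strictly smaller).

```python
import numpy as np

def binary_entropy(q):
    """Shannon binary entropy h(q) in bits, with h(0) = h(1) = 0."""
    if q in (0.0, 1.0):
        return 0.0
    return float(-q * np.log2(q) - (1 - q) * np.log2(1 - q))

def bb84_asymptotic_rate(qber):
    """Asymptotic BB84 secret-key rate per sifted bit, 1 - 2 h(Q),
    assuming one-way error correction at the Shannon limit."""
    return 1.0 - 2.0 * binary_entropy(qber)

for q in (0.01, 0.05, 0.11, 0.12):
    print(f"QBER {q:.2f}: rate {bb84_asymptotic_rate(q):+.4f}")
```

The rate crosses zero near the well-known QBER threshold of about 11%; finite-size corrections of the kind quantified in the paper push the tolerable error rate below this asymptotic value.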

Macro-realism is the position that certain macroscopic observables must always possess definite values: e.g. the table is in some definite position, even if we do not know what that is precisely. The traditional understanding is that by assuming macro-realism one can derive the Leggett-Garg inequalities, which constrain the possible statistics from certain experiments. Since quantum experiments can violate the Leggett-Garg inequalities, this is taken to rule out the possibility of macro-realism in a quantum universe. However, recent analyses have exposed loopholes in the Leggett-Garg argument, which allow many types of macro-realism to be compatible with quantum theory and hence violation of the Leggett-Garg inequalities. This paper takes a different approach to ruling out macro-realism and the result is a no-go theorem for macro-realism in quantum theory that is stronger than the Leggett-Garg argument. This approach uses the framework of ontological models: an elegant way to reason about foundational issues in quantum theory which has successfully produced many other recent results, such as the PBR theorem.

Quantum 1, 13 (2017). https://doi.org/10.22331/q-2017-07-14-13

First, a clarification: Quantum’s social media accounts are currently managed by the Executive Board, and are their exclusive responsibility. They do not represent the views of Quantum as a whole unless explicitly said otherwise. In particular, sharing of opinion articles does not imply endorsement.

Content shared by Quantum generally falls into the following categories:

- **Papers** published in Quantum and followups (such as editorials, perspectives and further media coverage).

- **News** related to Quantum (for example updates on policies, outreach events and media coverage).

- News about **quantum science** that is of interest to the larger community, at our discretion.

Since we aim to be an international venue for quantum sciences, without any regional bias, for now we have decided not to advertise local workshops, initiatives and programs, no matter how personally supportive we may be of them. The only workshops and events mentioned are in the context of Quantum doing outreach there, and generally only after the event has taken place.

- Analyses and opinion articles about **life in academia**, at our discretion.

Examples of topics we may cover include excessive pressure on academics, mental health in academia, endemic problems in the job market, harassment and systemic discrimination. Whenever possible, we will favour general analyses of a phenomenon rather than news of a particular instance.

- News, analyses and opinion articles about **academic publishing and open science**, at our discretion (here we may refer to individual events).

Quantum may invite individuals to **write opinion articles** on any of the subjects described above. Those will be published on Quantum’s website (officially in the online journal Quantum Views under a CC BY 4.0 licence, such that copyright remains with the authors). We expect most of these articles to be editorial-like viewpoints (“perspectives”) about papers published in Quantum.

We will review these guidelines at the end of 2017.

SciPost’s Jean-Sébastien Caux and Quantum’s Christian Gogolin were both invited to speak about “the future of scientific publishing” at a dedicated session on that topic during YRM 2017 in Tarragona.

SciPost and Quantum, while independent endeavors, share many of their core values and have many common goals. Together with the YRM participants, they discussed topics such as the opportunities and dangers of open-access publishing, the relevance of new technologies for publishing, and the influence of funding policies on publishing.

We thank the participants of YRM 2017 for the very interesting discussions and the organizers for the opportunity to participate!

Quantum has been assigned an International Standard Serial Number (ISSN) by the ISSN International Centre.

The ISSN is similar to the more widely known ISBN, which is used to uniquely identify books. The ISSN achieves the same, but for periodically appearing publications, like journals. Quantum is thus from now on uniquely identified by its ISSN **2521-327X**.

Having obtained an ISSN enables us to proceed with developing Quantum further and will allow us to apply for inclusion in the Web of Science and the Directory of Open Access Journals.

Papers already published in Quantum (and the respective .bib files) have been updated to include the ISSN.
