Experimental demonstration of Continuous-Variable Quantum Key Distribution with a silicon photonics integrated receiver

Quantum Key Distribution (QKD) is a prominent application in the field of quantum cryptography providing information-theoretic security for secret key exchange. The implementation of QKD systems on photonic integrated circuits (PICs) can reduce the size and cost of such systems and facilitate their deployment in practical infrastructures. To this end, continuous-variable (CV) QKD systems are particularly well-suited as they do not require single-photon detectors, whose integration is presently challenging. Here we present a CV-QKD receiver based on a silicon PIC capable of performing balanced detection. We characterize its performance in a laboratory QKD setup using a frequency multiplexed pilot scheme with specifically designed data processing allowing for high modulation and secret key rates. The obtained excess noise values are compatible with asymptotic secret key rates of 2.4 Mbit/s and 220 kbit/s at an emulated distance of 10 km and 23 km, respectively. These results demonstrate the potential of this technology towards fully integrated devices suitable for high-speed, metropolitan-distance secure communication.

Experimental Certification of Quantum Transmission via Bell's Theorem

Quantum transmission links are central elements in essentially all implementations of quantum information protocols. Emerging progress in quantum technologies involving such links needs to be accompanied by appropriate certification tools. In adversarial scenarios, a certification method can be vulnerable to attacks if too much trust is placed in the underlying system. Here, we propose a protocol in a device-independent framework, which allows for the certification of practical quantum transmission links in scenarios where minimal assumptions are made about the functioning of the certification setup. In particular, we take unavoidable transmission losses into account by modeling the link as a completely-positive trace-decreasing map. We also, crucially, remove the assumption of independent and identically distributed samples, which is known to be incompatible with adversarial settings. Finally, in view of the use of the certified transmitted states for follow-up applications, our protocol moves beyond certification of the channel and allows us to estimate the quality of the transmitted state itself. To illustrate the practical relevance and feasibility of our protocol with currently available technology, we provide an experimental implementation based on a state-of-the-art polarization-entangled photon pair source in a Sagnac configuration and analyze its robustness for realistic losses and errors.

Towards a Unified Quantum Protocol Framework: Classification, Implementation, and Use Cases

We present a framework for the unification and standardization of quantum network protocols, making their realization easier and expanding their use cases to a broader range of communities interested in quantum technologies. Our framework is available as an open-source repository, the Quantum Protocol Zoo. We follow a modular approach by identifying two key components: Functionality, which connects to real-world applications, and Protocol, which is a set of instructions between two or more parties, at least one of which has a quantum device. Based on the different stages of the quantum internet and use cases in the commercialization of quantum communication, our framework classifies quantum cryptographic functionalities and the various protocol designs implementing these functionalities. Towards this classification, we introduce a novel concept of resource visualization for quantum protocols, which includes two interfaces: one to identify the building blocks for implementing a given protocol, and another to identify the protocols accessible when certain physical resources or functionalities are available. Such a classification provides a hierarchy of quantum protocols based on their use cases and resource allocation. We have identified various valuable tools to improve its representation, with a range of techniques from abstract cryptography to graphical visualizations of the resource hierarchy in quantum networks. We elucidate the structure of the zoo and its primary features in this article for a broad class of quantum information scientists, physicists, computer science theorists and end-users. Since its introduction in 2018, the Quantum Protocol Zoo has served the quantum networks community by helping to establish the use cases of emerging quantum internet networks. In that spirit, we also present some applications of our framework from different perspectives.

Establishing shared secret keys on quantum line networks: protocol and security

We show the security of multi-user key establishment on a single line of quantum communication. More precisely, we consider a quantum communication architecture where qubit generation and measurement happen at the two ends of the line, whilst intermediate parties are limited to single-qubit unitary transforms. This network topology has previously been introduced to implement quantum-assisted secret-sharing protocols for classical data, as well as key establishment and secure computing. This architecture has numerous advantages. The intermediate nodes use only simplified hardware, which makes them easier to implement. Moreover, key establishment between arbitrary pairs of parties in the network does not require key routing through intermediate nodes. This is in contrast with quantum key distribution (QKD) networks, in which non-adjacent nodes need intermediate ones to route keys, thereby revealing these keys to the intermediate parties and consuming previously established keys to secure the routing process. Our main result is to show the security of key establishment on quantum line networks. We show the security using the framework of abstract cryptography. This immediately makes the security composable, showing that the keys can be used for encryption or other tasks.

Towards the Impossibility of Quantum Public Key Encryption with Classical Keys from One-Way Functions

There has been recent interest in proposing quantum protocols whose security relies on weaker computational assumptions than their classical counterparts. Importantly for our work, it has recently been shown that public-key encryption (PKE) from one-way functions (OWF) is possible if we consider quantum public keys. Note that we do not expect classical PKE from OWF, given the impossibility results of Impagliazzo and Rudich (STOC'89). However, the distribution of quantum public keys is a challenging task. The main question that motivates our work is therefore whether quantum PKE from OWF is possible with classical public keys. Such protocols are impossible if ciphertexts are also classical, given the impossibility result of Austrin et al. (CRYPTO'22) on quantum-enhanced key agreement (KA) with classical communication. In this paper, we focus on a black-box separation for PKE with classical public keys and quantum ciphertexts from OWF under the polynomial compatibility conjecture, first introduced by Austrin et al. More precisely, we show the separation when the decryption algorithm of the PKE does not query the OWF. We prove our result by extending the techniques of Austrin et al., and we show an attack on KA in an extended classical communication model where the last message in the protocol can be a quantum state.

The quantum switch is uniquely defined by its action on unitary operations

The quantum switch is a quantum process that creates a coherent control between different unitary operations. It is often described as a process transforming a pair of unitary operations $(U_1, U_2)$ into a controlled unitary operation that coherently applies them in different orders, as $|0\rangle\langle 0| \otimes U_1U_2 + |1\rangle\langle 1| \otimes U_2U_1$. This description, however, does not directly define its action on non-unitary operations. The action of the quantum switch on non-unitary operations is then chosen to be a ``natural'' extension of its action on unitary operations. In general, the action of a process on non-unitary operations is not uniquely determined by its action on unitary operations, so in principle there could be a set of inequivalent extensions of the quantum switch to non-unitary operations. We prove, however, that the natural extension is the only possibility for the quantum switch in the 2-slot case. In other words, contrary to the general case, the action of the quantum switch on non-unitary operations (as a linear and completely CP-preserving supermap) is completely determined by its action on unitary operations. We also discuss the general problem of when the complete description of a quantum process is uniquely determined by its action on unitary operations, and identify a set of single-slot processes which are completely defined by their action on unitary operations.
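The controlled-order action quoted in this abstract is concrete enough to sketch numerically. Below is a minimal numpy illustration (not the paper's supermap formalism; the gate choices are arbitrary assumptions) that builds $|0\rangle\langle 0| \otimes U_1U_2 + |1\rangle\langle 1| \otimes U_2U_1$ for two sample single-qubit unitaries and checks that the result is again unitary.

```python
import numpy as np

# Illustrative sketch: action of the 2-slot quantum switch on a pair
# of unitaries (U1, U2), as described in the abstract:
#   S(U1, U2) = |0><0| (x) U1 U2  +  |1><1| (x) U2 U1
def switch_on_unitaries(U1, U2):
    P0 = np.array([[1, 0], [0, 0]])  # |0><0| projector on the control
    P1 = np.array([[0, 0], [0, 1]])  # |1><1| projector on the control
    return np.kron(P0, U1 @ U2) + np.kron(P1, U2 @ U1)

# Example inputs (arbitrary choices): Hadamard and the phase gate S
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
W = switch_on_unitaries(H, S)

# The controlled-order combination of unitaries is itself unitary
assert np.allclose(W.conj().T @ W, np.eye(4))
```

Since each branch $U_1U_2$ and $U_2U_1$ is unitary and the control projectors are orthogonal, the combination is always unitary, which the final check confirms.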

Corrected Bell and Noncontextuality Inequalities for Realistic Experiments

Contextuality is a feature of quantum correlations. It is crucial from a foundational perspective as a nonclassical phenomenon, and from an applied perspective as a resource for quantum advantage. It is commonly defined in terms of hidden variables, for which it forces a contradiction with the assumptions of parameter-independence and determinism. The former can be justified by the empirical property of non-signalling or non-disturbance, and the latter by the empirical property of measurement sharpness. However, in realistic experiments neither empirical property holds exactly, which leads to possible objections to contextuality as a form of nonclassicality, and potential vulnerabilities for supposed quantum advantages. We introduce measures to quantify both properties, and introduce quantified relaxations of the corresponding assumptions. We prove the continuity of a known measure of contextuality, the contextual fraction, which ensures its robustness to noise. We then bound the extent to which these relaxations can account for contextuality, via correction terms to the contextual fraction (or to any noncontextuality inequality), culminating in a notion of genuine contextuality, which is robust to experimental imperfections. Finally, we show that our result is general enough to apply or relate to a variety of established results and experimental setups.

Quantum many-body dynamics for combinatorial optimisation and machine learning

The goal of this thesis is to explore and qualify the use of N-body quantum dynamics to solve hard industrial problems and machine learning tasks. As a collaboration between industrial and academic partners, this thesis explores the capabilities of a neutral-atom device in tackling real-world problems. First, we look at combinatorial optimisation problems and showcase how neutral atoms can naturally encode a famous combinatorial optimisation problem called Maximum Independent Set on Unit-Disk graphs. These problems appear in industrial challenges such as the Smart-Charging of electric vehicles. The goal is to understand why and how we can expect a quantum approach to solve this problem more efficiently than classical methods, and our proposed algorithms are tested on real hardware using a dataset from EDF, the French electric utility company. We furthermore explore the use of 3D arrangements of neutral atoms to tackle problems that are out of reach of classical approximation methods. Finally, we try to improve our intuition on the types of instances for which a quantum approach can(not) yield better results than classical methods. In the second part of this thesis, we explore the use of quantum dynamics in the field of machine learning. In addition to being a great chain of buzzwords, Quantum Machine Learning (QML) has been increasingly investigated in the past years. In this part, we propose and implement a quantum protocol for machine learning on datasets of graphs, and show promising results regarding the complexity of the associated feature space. Finally, we explore the expressivity of quantum machine learning models and showcase examples where classical methods can efficiently approximate quantum machine learning models.

Learning unitaries with quantum statistical queries

We propose several algorithms for learning unitary operators from quantum statistical queries (QSQs) with respect to their Choi-Jamiolkowski state. Quantum statistical queries capture the capabilities of a learner with limited quantum resources, which receives as input only noisy estimates of expected values of measurements. Our methods hinge on a novel technique for estimating the Fourier mass of a unitary on a subset of Pauli strings with a single quantum statistical query, generalizing a previous result for uniform quantum examples. Exploiting this insight, we show that the quantum Goldreich-Levin algorithm can be implemented with quantum statistical queries, whereas the prior version of the algorithm involves oracle access to the unitary and its inverse. Moreover, we prove that $O(\log n)$-juntas and quantum Boolean functions with constant total influence are efficiently learnable in our model, and constant-depth circuits are learnable sample-efficiently with quantum statistical queries. On the other hand, all previous algorithms for these tasks require direct access to the Choi-Jamiolkowski state or oracle access to the unitary. In addition, our upper bounds imply that the actions of those classes of unitaries on locally scrambled ensembles can be efficiently learned. We also demonstrate that, despite these positive results, quantum statistical queries lead to an exponentially larger sample complexity for certain tasks, compared to separable measurements on the Choi-Jamiolkowski state. In particular, we show an exponential lower bound for learning a class of phase-oracle unitaries and a double exponential lower bound for testing the unitarity of channels, adapting to our setting previous arguments for quantum states. Finally, we propose a new definition of average-case surrogate models, showing a potential application of our results to hybrid quantum machine learning.
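As a small illustration of the Fourier-analytic language used in this abstract, the sketch below (illustrative only; it is not the paper's QSQ estimation procedure) expands a single-qubit unitary in the Pauli basis, $U = \sum_P \hat{U}(P)\,P$ with $\hat{U}(P) = \mathrm{tr}(P U)/2^n$, and verifies Parseval's identity: the Fourier masses $|\hat{U}(P)|^2$ of a unitary sum to one.

```python
import numpy as np

# The four single-qubit Pauli matrices
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# Example unitary (arbitrary choice): the Hadamard gate H = (X + Z)/sqrt(2)
U = (X + Z) / np.sqrt(2)

# Fourier coefficient on Pauli P: U_hat(P) = tr(P U) / 2^n   (here n = 1)
masses = {name: abs(np.trace(P @ U) / 2) ** 2
          for name, P in [("I", I), ("X", X), ("Y", Y), ("Z", Z)]}

# Parseval: for a unitary, the Fourier masses sum to 1
assert abs(sum(masses.values()) - 1) < 1e-12
# For the Hadamard, the mass is split evenly between X and Z
assert abs(masses["X"] - 0.5) < 1e-12 and abs(masses["Z"] - 0.5) < 1e-12
```

Estimating such masses on a subset of Pauli strings from noisy expectation values is, per the abstract, what a single QSQ can achieve; the exact linear-algebra computation above is just the noiseless reference point.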

Nonlocality activation in a photonic quantum network

Bell nonlocality refers to correlations between two distant, entangled particles that challenge classical notions of local causality. Beyond its foundational significance, nonlocality is crucial for device-independent technologies like quantum key distribution and randomness generation. Nonlocality quickly deteriorates in the presence of noise, and restoring nonlocal correlations requires additional resources. These often come in the form of many instances of the input state and joint measurements, incurring a significant resource overhead. Here, we experimentally demonstrate that single copies of Bell-local states, incapable of violating any standard Bell inequality, can give rise to nonlocality after being embedded into a quantum network of multiple parties. We subject the initial entangled state to a quantum channel that broadcasts part of the state to two independent receivers and certify the nonlocality in the resulting network by violating a tailored Bell-like inequality. We obtain these results without making any assumptions about the prepared states, the quantum channel, or the validity of quantum theory. Our findings have fundamental implications for nonlocality and enable the practical use of nonlocal correlations in real-world applications, even in scenarios dominated by noise.

A tale of resilience: On the practical security of masked software implementations

Masking constitutes a provably-secure approach against side-channel attacks. However, recombination effects (e.g., transitions) severely reduce the proven security. In the software domain, CPU microarchitectures include techniques that improve execution performance, and several studies show that such techniques induce recombination effects. Furthermore, these techniques implicitly induce some form of parallelism, and the potential associated threat has never been investigated. In addition, the practical security of masking depends on the chosen masking scheme. Few works have analysed the security of software protected by different masking schemes, and none has considered the parallelism threat. The literature thus lacks a comprehensive investigation of the practical security of software implementations relying on various masking schemes in the presence of micro-architecture-induced recombination effects and parallelism. This work performs a first step to fill this gap. Specifically, we evaluate the practical security offered by first-order Boolean, arithmetic-sum and inner-product masking against transitions and parallelism in software. First, we assess the presence of transition-based and parallelism-based leakages in software. Second, we evaluate the security of the encodings of the selected masking schemes with respect to each leakage source via micro-benchmarks. Third, we assess the practical security of different AES-128 software implementations, one for each selected masking scheme. We carry out the investigation on the STM32F215 and STM32F303 micro-controllers. We show that (1) the CPU's parallel features allow successful attacks against masked implementations resistant to transition-based leakages; and (2) implementation choices (e.g., finite-field multiplication) impact the practical security of masked software implementations in the presence of recombination effects.
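For readers unfamiliar with the schemes being compared, first-order Boolean masking can be sketched in a few lines: a sensitive value is split into two shares whose XOR recovers it, so that each share in isolation is uniformly random. This is a generic textbook sketch, not the evaluated STM32 implementations.

```python
import secrets

# First-order Boolean masking: split a sensitive byte x into two shares
# (x ^ r, r). Each share alone is uniformly distributed, so a leak of a
# single share reveals nothing about x; only their recombination does.
def mask(x):
    r = secrets.randbelow(256)   # fresh uniformly random mask
    return (x ^ r, r)            # (masked value, mask)

def unmask(shares):
    m, r = shares
    return m ^ r                 # (x ^ r) ^ r == x

x = 0xA7
shares = mask(x)
assert unmask(shares) == x
```

The "recombination effects" discussed in the abstract are precisely hardware events (e.g., a register transition between the two shares) that leak a function of both shares at once, defeating the guarantee that each share is independently random.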

A unifying framework for differentially private quantum algorithms

Differential privacy is a widely used notion of security that enables the processing of sensitive information. In short, differentially private algorithms map “neighbouring” inputs to close output distributions. Prior work proposed several quantum extensions of differential privacy, each of them built on substantially different notions of neighbouring quantum states. In this paper, we propose a novel and general definition of neighbouring quantum states. We demonstrate that this definition captures the underlying structure of quantum encodings and can be used to provide exponentially tighter privacy guarantees for quantum measurements. Our approach combines the addition of classical and quantum noise and is motivated by the noisy nature of near-term quantum devices. We also investigate an alternative setting where we are provided with multiple copies of the input state. In this case, differential privacy can be ensured with little loss in accuracy by combining concentration of measure and noise-adding mechanisms. En route, we prove the advanced joint convexity of the quantum hockey-stick divergence and demonstrate how this result can be applied to quantum differential privacy. Finally, we complement our theoretical findings with an empirical estimation of the certified adversarial robustness ensured by differentially private measurements.

Higher-order Process Matrix Tomography of a passively-stable Quantum SWITCH

The field of indefinite causal order (ICO) has seen a recent surge in interest. Much of this research has focused on the quantum SWITCH, wherein multiple parties act in a superposition of different orders in a manner transcending the quantum circuit model. This results in a new resource for quantum protocols, and is exciting for its relation to issues in foundational physics. The quantum SWITCH is also an example of a higher-order quantum operation, in that it not only transforms quantum states, but also other quantum operations. To date, no higher-order quantum operation has been completely experimentally characterized. Indeed, past work on the quantum SWITCH has confirmed its ICO by measuring causal witnesses or demonstrating resource advantages, but the complete process matrix has only been described theoretically. Here, we perform higher-order quantum process tomography. Doing so, however, requires exponentially many measurements, with a scaling worse than that of standard process tomography. We overcome this challenge by creating a new passively-stable fiber-based quantum SWITCH, using active optical elements to deterministically generate and manipulate time-bin encoded qubits. Moreover, our new architecture for the quantum SWITCH can be readily scaled to multiple parties. By reconstructing the process matrix, we estimate its fidelity and tailor different causal witnesses directly for our experiment. To achieve this, we measure a tomographically complete set of settings that also spans the input operation space. Our tomography protocol allows for the characterization and debugging of higher-order quantum operations with and without an ICO, while our experimental time-bin techniques could enable the creation of a new realm of higher-order quantum operations with an ICO.

Transformations between arbitrary (quantum) objects and the emergence of indefinite causality

Many fundamental and key objects in quantum mechanics are linear mappings between particular affine/linear spaces. This structure includes basic quantum elements such as states, measurements, channels, instruments, non-signalling channels and channels with memory, and also higher-order operations such as superchannels, quantum combs, n-time processes, testers, and process matrices which may not respect a definite causal order. Deducing and characterising their structural properties in terms of linear and semidefinite constraints is not only of foundational relevance, but plays an important role in enabling the numerical optimization over sets of quantum objects and allowing simpler connections between different concepts and objects. Here, we provide a general framework to deduce these properties in a direct and easy-to-use way. Additionally, while primarily guided by practical quantum mechanical considerations, we extend our analysis to mappings between \textit{general} linear/affine spaces and derive their properties, opening the possibility of analysing sets which are not explicitly forbidden by quantum theory, but are still not much explored. Together, these results yield versatile and readily applicable tools for all tasks that require the characterization of linear transformations, in quantum mechanics and beyond. As an application of our methods, we discuss the emergence of indefinite causality in higher-order quantum transformations.

Photonic Resources for the Implementation of Quantum Network Protocols

The security of modern communication networks can be enhanced thanks to the laws of quantum mechanics. In this thesis, we develop a source of photon pairs, emitted via spontaneous parametric down-conversion, which we use to demonstrate new quantum-cryptographic primitives. Pairs are used as heralded single photons or as close-to-maximally entangled pairs. We also provide a novel design to adapt this source to multipartite entanglement generation. We provide the first experimental implementation of a quantum weak coin flipping protocol, which allows two distant players to decide on a random winner. We demonstrate a refined and loss-tolerant version of a recently proposed theoretical protocol, using heralded single photons mixed with vacuum to produce entanglement. It displays cheat sensitivity, enabled by quantum interference and a fast optical switch. We also provide a new protocol for certifying the transmission of an unmeasured qubit through a lossy and untrusted channel. The security is based on new fundamental results on lossy quantum channels. We test the channel's quality device-independently, using self-testing of Bell or steering inequalities with polarization-entangled photon pairs to probe the channel. We show that this allows the certification of quantum communication even for large channel-induced losses.

Classically Approximating Variational Quantum Machine Learning with Random Fourier Features

Many applications of quantum computing in the near term rely on variational quantum circuits (VQCs). They have been showcased as a promising model for reaching a quantum advantage in machine learning with current noisy intermediate-scale quantum (NISQ) computers. It is often believed that the power of VQCs relies on their exponentially large feature space, and extensive work has explored the expressiveness and trainability of VQCs in that regard. In our work, we propose a classical sampling method that may closely approximate a VQC with Hamiltonian encoding, given only the description of its architecture. It uses the seminal proposal of Random Fourier Features (RFF) and the fact that VQCs can be seen as large Fourier series. We provide general theoretical bounds for classically approximating models built from exponentially large quantum feature spaces by sampling a few frequencies to build an equivalent low-dimensional kernel, and we show experimentally that this approximation is efficient for several encoding strategies. Precisely, we show that the number of required samples grows favorably with the size of the quantum spectrum. This tool therefore questions the hope for quantum advantage from VQCs in many cases, but conversely helps to narrow the conditions for their potential success. We expect VQCs with various and complex encoding Hamiltonians, or with large input dimension, to become more robust to classical approximations.
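The classical ingredient named in this abstract, Random Fourier Features, can be sketched for the standard shift-invariant (Gaussian) kernel case; the paper's sampling of a VQC's frequency spectrum follows the same pattern. Dimensions, bandwidth, and the tolerance below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier Features (Rahimi & Recht) for the unit-bandwidth
# Gaussian kernel k(x, y) = exp(-||x - y||^2 / 2): sampling frequencies
# from the kernel's spectral density (standard normal) yields a
# low-dimensional feature map whose inner product approximates k.
d, D = 3, 10_000               # input dimension, number of sampled frequencies
W = rng.normal(size=(D, d))    # frequencies drawn from the spectral density
b = rng.uniform(0, 2 * np.pi, size=D)

def features(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = rng.normal(size=d)
y = rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
approx = features(x) @ features(y)
assert abs(exact - approx) < 0.05   # error shrinks as O(1/sqrt(D))
```

In the VQC setting described above, the role of the spectral density is played by the circuit's Fourier spectrum, determined by its encoding Hamiltonians; sampling only a few of those frequencies gives the low-dimensional surrogate kernel.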

Quantum Lock: A Provable Quantum Communication Advantage

Physical unclonable functions (PUFs) provide a unique fingerprint to a physical entity by exploiting inherent physical randomness. Gao et al. discussed the vulnerability of most current-day PUFs to sophisticated machine-learning-based attacks. We address this problem by integrating classical PUFs with existing quantum communication technology. Specifically, this paper proposes a generic design of provably secure PUFs, called hybrid locked PUFs (HLPUFs), providing a practical solution for securing classical PUFs. An HLPUF uses a classical PUF (CPUF) and encodes its output into non-orthogonal quantum states to hide the outcomes of the underlying CPUF from any adversary. Here we introduce a quantum lock to protect the HLPUFs from general adversaries. The indistinguishability of the non-orthogonal quantum states, together with the quantum lockdown technique, prevents the adversary from accessing the outcome of the CPUFs. Moreover, we show that by exploiting non-classical properties of quantum states, the HLPUF allows the server to reuse challenge-response pairs for further client authentication. This provides an efficient solution for running PUF-based client authentication for an extended period while maintaining a small challenge-response pair database on the server side. Finally, we support our theoretical contributions by instantiating the HLPUF design using accessible real-world CPUFs. We use optimal classical machine-learning attacks to forge both the CPUFs and the HLPUFs, and we certify the security gap in our numerical simulation for a construction that is ready for implementation.

Experimental cheat-sensitive quantum weak coin flipping

As in modern communication networks, the security of quantum networks will rely on complex cryptographic tasks that are based on a handful of fundamental primitives. Weak coin flipping (WCF) is a significant such primitive, which allows two mistrustful parties to agree on a random bit while they favor opposite outcomes. Remarkably, perfect information-theoretic security can be achieved in principle for quantum WCF. Here, we overcome conceptual and practical issues that have prevented the experimental demonstration of this primitive to date, and demonstrate how quantum resources can provide cheat sensitivity, whereby each party can detect a cheating opponent, and an honest party is never sanctioned. Such a property is not known to be classically achievable with information-theoretic security. Our experiment implements a refined, loss-tolerant version of a recently proposed theoretical protocol and exploits heralded single photons generated by spontaneous parametric down-conversion, a carefully optimized linear optical interferometer including beam splitters with variable reflectivities, and a fast optical switch for the verification step. High values of our protocol benchmarks are maintained for attenuation corresponding to several kilometers of telecom optical fiber.

QEnclave - A practical solution for secure quantum cloud computing

We introduce a secure hardware device named a QEnclave that can secure the remote execution of quantum operations while only using classical controls. This device extends to quantum computing the classical concept of a secure enclave, which isolates a computation from its environment to provide privacy and tamper-resistance. Remarkably, our QEnclave only performs single-qubit rotations but can nevertheless be used to secure an arbitrary quantum computation, even if the qubit source is controlled by an adversary. More precisely, by attaching a QEnclave to a quantum computer, a remote client controlling the QEnclave can securely delegate its computation to the server using solely classical communication. We investigate the security of our QEnclave by modeling it as an ideal functionality named remote state rotation (RSR). We show that this resource, similar to the previously introduced functionality of remote state preparation, allows blind delegated quantum computing with perfect security. Our proof in the Abstract Cryptography framework shows the construction of remote state preparation from remote state rotation while preserving security. An immediate consequence is the weakening of the requirements for blind delegated computation. While previous delegated protocols relied on a client that can either generate or measure quantum states, we show that this same functionality can be achieved with a client that only transforms quantum states, without generating or measuring them.

Quantum security of subset cover problems

The subset cover problem for $k \geq 1$ hash functions, which can be seen as an extension of the collision problem, was introduced in 2002 by Reyzin and Reyzin to analyse the security of their hash-function-based signature scheme HORS. The security of many hash-based signature schemes relies on this problem or a variant of it (e.g. HORS, SPHINCS, SPHINCS+, \dots). Recently, Yuan, Tibouchi and Abe (2022) introduced a variant of the subset cover problem, called restricted subset cover, and proposed a quantum algorithm for this problem. In this work, we prove that any quantum algorithm needs to make $\Omega\left(k^{-\frac{2^{k-1}}{2^k-1}}\cdot N^{\frac{2^{k-1}-1}{2^k-1}}\right)$ queries to the underlying hash functions to solve the restricted subset cover problem, which essentially matches the query complexity of the algorithm proposed by Yuan, Tibouchi and Abe. We also analyze the security of the general $(r,k)$-subset cover problem, which is the underlying problem that implies the unforgeability of HORS under an $r$-chosen message attack (for $r \geq 1$). We prove that a generic quantum algorithm needs to make $\Omega\left(N^{k/5}\right)$ queries to the underlying hash functions to find a $(1,k)$-subset cover. We also propose a quantum algorithm that finds a $(r,k)$-subset cover making $O\left(N^{k/(2+2r)}\right)$ queries to the $k$ hash functions.

Design and Optimization of Tools for the Quantum Internet

This thesis is written in the context of quantum Internet development. We try here to contribute to the community by discussing some security concerns and by providing detailed models and simulation studies of quantum internet architectures and protocols. We explore different aspects of quantum networks on the path to the Quantum Internet. After introducing basic quantum information notions, we define the Quantum Internet and highlight its main goals and challenges. Then, we list a few bipartite and multipartite applications. After that, we study the composable security of a multipartite entanglement verification protocol, which is used as a building block by many other protocols. In the following chapter, we perform simulations of different quantum repeater protocols allowing connection between two distant nodes. These repeaters use a defect in the crystalline structure of diamond, which we model. Finally, the last two chapters are dedicated to building and simulating an international quantum network architecture that minimizes the necessary hardware for the end users. We first study a metropolitan network, called the Quantum City, that we simulate in a Parisian context, highlighting the main parameters and today's performance. Then, we study the feasibility of connecting different quantum cities separated by hundreds of kilometers using satellites.

Post-Quantum Zero-Knowledge with Space-Bounded Simulation

The traditional definition of quantum zero-knowledge stipulates that the knowledge gained by any quantum polynomial-time verifier in an interactive protocol can be simulated by a quantum polynomial-time algorithm. One drawback of this definition is that it allows the simulator to consume significantly more computational resources than the verifier. We argue that this drawback renders the existing notion of quantum zero-knowledge not viable for certain settings, especially when dealing with near-term quantum devices. In this work, we initiate a fine-grained notion of post-quantum zero-knowledge that is more compatible with near-term quantum devices. We introduce the notion of $(s,f)$ space-bounded quantum zero-knowledge. In this new notion, we require that an $s$-qubit malicious verifier can be simulated by a quantum polynomial-time algorithm that uses at most $f(s)$ qubits, for some function $f(\cdot)$, with no restriction on the amount of classical memory consumed by either the verifier or the simulator. We explore this notion and establish both positive and negative results: - For verifiers with logarithmic quantum space $s$ and (arbitrary) polynomial classical space, we show that $(s,f)$-space-bounded QZK, for $f(s)=2s$, can be achieved based on the existence of post-quantum one-way functions. Moreover, our protocol runs in constant rounds. - For verifiers with super-logarithmic quantum space $s$, assuming the existence of post-quantum secure one-way functions, we show that $(s,f)$-space-bounded QZK protocols, with fully black-box simulation (classical analogue of black-box simulation), can only be achieved for languages in BQP.

Correction to: Dispelling myths on superposition attacks: formal security model and attack analyses

Bridging the gap between technology and policy in GDPR compliance: the role of differential privacy

Study of Protocols Between Classical Clients and a Quantum Server

Quantum computers promise surprising computational power by exploiting the stunning physical properties of infinitesimally small particles. In this thesis, I focus on designing and proving the security of protocols that allow a purely classical client to use the computational resources of a quantum server, in such a way that the performed computation is never revealed to the server. To this end, I develop a modular tool to generate on a remote server a quantum state that only the client is able to describe, and I show how multi-qubit quantum states can be generated more efficiently. I also prove that no such protocol can be secure in a generally composable model of security, including when our module is used in the UBQC protocol. In addition to delegated computation, this tool also proves useful for performing a task that might seem impossible to achieve at first sight: proving advanced properties of a quantum state in a non-interactive and non-destructive way, including when this state is generated collaboratively by several participants. This can be seen as a quantum analogue of classical Non-Interactive Zero-Knowledge proofs. This property is particularly useful for filtering the participants of a protocol without revealing their identity, and may have applications in other domains, for example to transmit a quantum state over a network while hiding the source and destination of the message. Finally, I discuss my ongoing independent work on One-Time Programs, mixing quantum cryptography, error-correcting codes and information theory.

Satellite-based Quantum Information Networks: Use cases, Architecture, and Roadmap

Quantum Information Networks (QINs) currently represent a major goal in the field of quantum communication technologies. Such QINs will allow connecting quantum devices (computers, sensors, communication stations, etc.) over long distances, thus significantly improving their intrinsic processing, sensing, and security capabilities. The core mechanism of a QIN is quantum state teleportation, demonstrated more than two decades ago, which consumes quantum entanglement, a new kind of network resource in this context. This paper is the result of a collaboration, under the auspices of the French space agency (CNES), between academic research groups and a space telecom industry actor that has defined and now executes a long-term roadmap towards operational QINs. Here, we address the key elements of this roadmap and describe the stage we have reached in its execution. First, we identify and quantitatively describe use cases per activity sector as a reference for the requirements on the QINs, including key performance targets. Second, we define a high-level architecture of a generic QIN so as to introduce structuring elements such as resources, layers, and governance. We then focus on the architecture of the space segment to identify its main design drivers and critical elements. A survey of the state of the art of these critical elements, as well as issues related to standardisation, is then presented. Based on these elements, we explain our three-stage roadmap. Finally, we detail the already concluded first step of this roadmap, namely the design of a space-to-ground entanglement distribution demonstrator, which relies on detailed simulations so as to efficiently allocate the performance requirements to each subsystem. We invite relevant entities to join our roadmap to progress together towards the ambitious goal of operational QINs in the next decade.

Measurement-based quantum computation beyond qubits

Measurement-based quantum computation (MBQC) is an alternative model for quantum computation, which makes careful use of the properties of the measurement of entangled quantum systems to perform transformations on an input. It differs fundamentally from the standard quantum circuit model in that measurement-based computations are naturally irreversible. This is an unavoidable consequence of the quantum description of measurements, but begets an obvious question: when does an MBQC implement an effectively reversible computation? The measurement calculus is a framework for reasoning about MBQC with the remarkable feature that every computation can be related in a canonical way to a graph. This allows one to use graph-theoretical tools to reason about MBQC problems, such as the reversibility question, and the resulting study of MBQC has had a large range of applications. However, the vast majority of the work on MBQC has focused on architectures using the simplest possible quantum system: the qubit. It remains an open question how much of this work can be lifted to other quantum systems. In this thesis, we begin to tackle this question, by introducing analogues of the measurement calculus for higher- and infinite-dimensional quantum systems. More specifically, we consider the case of qudits when the local dimension is an odd prime, and of continuous-variable systems familiar from the quantum physics of free particles. In each case, a calculus is introduced and given a suitable interpretation in terms of quantum operations. We then relate the resulting models to the standard circuit models, using graph-theoretical tools called “flow” conditions.

Sample-efficient device-independent quantum state verification and certification

Authentication of quantum sources is a crucial task in building reliable and efficient protocols for quantum information processing. Steady progress has been observed in recent years on the verification of quantum devices in the scenario with fully characterized measurement devices. When it comes to the scenario with uncharacterized measurements, the so-called black-box scenario, practical verification methods are still rather scarce. The development of self-testing methods is an important step forward, but so far these results have been used for reliable verification only by considering the asymptotic behavior of large, independent and identically distributed (IID) samples of a quantum resource. Such strong assumptions deprive the verification procedure of its truly device-independent character. In this paper, we develop a systematic approach to device-independent verification of quantum states free of IID assumptions in the finite-copy regime. Remarkably, we show that device-independent verification can be performed with optimal sample efficiency. Finally, for the case of independent copies, we develop a device-independent protocol for quantum state certification: a protocol in which a fragment of the resource copies is measured to warrant that the rest of the copies are close to some target state.

2022 Roadmap on integrated quantum photonics

Integrated photonics will play a key role in quantum systems as they grow from few-qubit prototypes to tens of thousands of qubits. The underlying optical quantum technologies can only be realized through the integration of these components onto quantum photonic integrated circuits (QPICs) with accompanying electronics. In the last decade, remarkable advances in quantum photonic integration have enabled table-top experiments to be scaled down to prototype chips with improvements in efficiency, robustness, and key performance metrics. These advances have enabled integrated quantum photonic technologies combining up to 650 optical and electrical components onto a single chip that are capable of programmable quantum information processing, chip-to-chip networking, hybrid quantum system integration, and high-speed communications. In this roadmap article, we highlight the status, current and future challenges, and emerging technologies in several key research areas in integrated quantum photonics, including photonic platforms, quantum and classical light sources, quantum frequency conversion, integrated detectors, and applications in computing, communications, and sensing. With advances in materials, photonic design architectures, fabrication and integration processes, packaging, and testing and benchmarking, in the next decade we can expect a transition from single- and few-function prototypes to large-scale integration of multi-functional and reconfigurable devices that will have a transformative impact on quantum information science and engineering.

Quantum Information Techniques for Quantum Metrology

Quantum metrology is an auspicious discipline of quantum information which is currently witnessing a surge of experimental breakthroughs and theoretical developments. The main goal of quantum metrology is to estimate unknown parameters as accurately as possible. By using quantum resources as probes, it is possible to attain a measurement precision that would otherwise be impossible using the best classical strategies. For example, for the task of phase estimation, the maximum precision (the Heisenberg limit) represents a quadratic gain over the best classical strategies. Of course, quantum metrology is not the sole quantum technology currently undergoing advances. The theme of this thesis is exploring how quantum metrology can be enhanced with other quantum techniques when appropriate, namely: graph states, error correction and cryptography. Graph states are an incredibly useful and versatile resource in quantum information. We aid in determining the full extent of the applicability of graph states by quantifying their practicality for the quantum metrology task of phase estimation. In particular, the utility of a graph state can be characterised in terms of the shape of the corresponding graph. From this, we devise a method to transform any graph state into a larger graph state (named a bundled graph state) which approximately saturates the Heisenberg limit. Additionally, we show that graph states are a robust resource against the effects of noise, namely dephasing and a small number of erasures, and that the quantum Cramér-Rao bound can be saturated with a simple measurement strategy. Noise is one of the biggest obstacles for quantum metrology, limiting its achievable precision and sensitivity. It has been shown that if the environmental noise is distinguishable from the dynamics of the quantum metrology task, then frequent applications of error correction can be used to combat the effects of noise.
In practice, however, the frequency of error correction required to maintain Heisenberg-like precision is unobtainable for current quantum technologies. We explore the limitations of error-correction-enhanced quantum metrology by taking technological constraints and impediments into consideration, from which we establish the regime in which the Heisenberg limit can be maintained in the presence of noise. Fully implementing a quantum metrology problem is technologically demanding: entangled quantum states must be generated and measured with high fidelity. One solution, in the instance where one lacks all of the necessary quantum hardware, is to delegate the task to a third party. In doing so, several security issues naturally arise because of the possibility of interference by a malicious adversary. We address these issues by developing the notion of a cryptographic framework for quantum metrology. We show that the precision of the quantum metrology problem can be directly related to the soundness of an employed cryptographic protocol. Additionally, we develop cryptographic protocols for a variety of cryptographically motivated settings, namely: quantum metrology over an unsecured quantum channel and quantum metrology with a task delegated to an untrusted party. Quantum sensing networks have been gaining interest in the quantum metrology community over the past few years. They are a natural choice for spatially distributed problems and multiparameter problems. The three proposed techniques, graph states, error correction and cryptography, are a natural fit to be immersed in quantum sensing networks. Graph states are a well-known candidate for the description of a quantum network, error correction can be used to mitigate the effects of a noisy quantum channel, and the cryptographic framework of quantum metrology can be used to add a layer of security. Combining these works formally is a future perspective.
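For concreteness, the quadratic gain mentioned above refers to the standard textbook scalings (stated here as background, not a result of the thesis): with $N$ probes, the phase uncertainty of the best classical (uncorrelated) strategy scales as the standard quantum limit, whereas entangled probes can in principle reach the Heisenberg limit,
$$\Delta\phi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad \Delta\phi_{\mathrm{HL}} \sim \frac{1}{N}.$$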

Multipartite communications over quantum networks

The field of quantum networks is currently a major area of investigation in quantum technologies. One of the simplest acts of quantum communication, the distribution of a single bipartite entangled state, has been highly studied as it is a simple problem to characterize, simulate and implement. It is also useful for a prominent quantum network application: the secure distribution of a cryptographic key. However, the use of quantum networks goes far beyond this, and we need to study the simultaneous distribution of multipartite states over quantum networks. In this manuscript, we report on several works of progress in the domain. We first study the recycling of previously distributed resources in the asymptotic regime by the use of entanglement combing and quantum state merging. Then, we characterize the distribution of quantum states using the tensor network formalism. We also characterize a broad class of classical distribution protocols in the same formalism and use this similarity to compare the distribution of classical correlations over classical networks to the distribution of quantum states over quantum networks. We also build protocols to distribute specific classes of states over quantum networks, such as graph states and GHZ states, by using the graph state formalism and a bit of graph theory. Finally, we implement the previous protocols in a more realistic setting and participate in the elaboration of multipartite features for a quantum network simulator, QuISP. We also aim to popularize the notions of quantum information to a broad audience, and report on the creation of a video game based on quantum optics, adding to the existing popularization ludography.

Tight adaptive reprogramming in the QROM

The random oracle model (ROM) enjoys widespread popularity, mostly because it tends to allow for tight and conceptually simple proofs where provable security in the standard model is elusive or costly. While the quantum-accessible random oracle model (QROM) is the adequate replacement for the ROM in the post-quantum security setting, it has thus far failed to provide these advantages in many settings. In this work, we focus on adaptive reprogrammability, a feature of the ROM enabling tight and simple proofs in many settings. We show that the straightforward quantum-accessible generalization of adaptive reprogramming is feasible by proving a bound on the adversarial advantage in distinguishing whether a random oracle has been reprogrammed or not. We show that our bound is tight by providing a matching attack. We go on to demonstrate that our technique recovers the mentioned advantages of the ROM in three QROM applications: 1) We give a tighter proof of security of the message compression routine as used by XMSS. 2) We show that the standard ROM proof of chosen-message security for Fiat-Shamir signatures can be lifted to the QROM, straightforwardly, achieving a tighter reduction than previously known. 3) We give the first QROM proof of security against fault injection and nonce attacks for the hedged Fiat-Shamir transform.

Classical-quantum network coding: a story about tensor

We study here the conditions for performing the distribution of a pure state on a quantum network using quantum operations which can succeed with non-zero probability, the Stochastic Local Operations and Classical Communication (SLOCC) operations. In their pioneering 2010 work, Kobayashi et al. showed how to convert any classical network coding protocol into a quantum network coding protocol. However, they left open whether the existence of a quantum network coding protocol implies the existence of a classical one. Motivated by this question, we characterize the set of distribution tasks achievable with non-zero probability for both classical and quantum networks. We develop a formalism which encompasses both types of distribution protocols by reducing the solving of a distribution task to the factorization of a tensor with complex or real positive coefficients. Using this formalism, we examine the equivalences and differences between both types of distribution protocols, exhibiting several elementary and fundamental relations between them as well as concrete examples of both convergence and divergence. We answer the previously open question in the negative: some tasks are achievable in the quantum setting but not in the classical one. We believe this formalism to be a useful tool for studying the extent of the ability of quantum networks to perform multipartite distribution tasks.

Randomized Benchmarking with Stabilizer Verification and Gate Synthesis

Recently, there has been an emergence of useful applications for noisy intermediate-scale quantum (NISQ) devices, notably, though not exclusively, in the fields of quantum machine learning and variational quantum algorithms. In such applications, circuits of various depths, composed of different sets of gates, are run on NISQ devices. Therefore, it is crucial to find practical ways to capture the general performance of circuits on these devices. Motivated by this pressing need, we modified the standard Clifford randomized benchmarking (RB) and interleaved RB schemes, tailoring them to hardware limitations. First, we remove the requirement for, and assumptions on, the inverse operator in Clifford RB by incorporating a technique from quantum verification. This introduces another figure of merit by which to assess the quality of the NISQ hardware, namely the acceptance probability of quantum verification. Many quantum algorithms that provide an advantage over classical algorithms demand the use of Clifford as well as non-Clifford gates. Therefore, as our second contribution, we develop a technique for characterising a variety of non-Clifford gates by combining tools from gate synthesis with interleaved RB. Both of our techniques are most relevant when used in conjunction with RB schemes that benchmark generators (or native gates) of the Clifford group, and in low-error regimes.

Non-Destructive Zero-Knowledge Proofs on Quantum States, and Multi-Party Generation of Authorized Hidden GHZ States

Due to the no-cloning principle, quantum states appear to be very useful in cryptography. But this very same property also has drawbacks: when receiving a quantum state, it is nearly impossible for the receiver to efficiently check non-trivial properties of that state without destroying it. In this work, we initiate the study of Non-Destructive Zero-Knowledge Proofs on Quantum States. Our method binds a quantum state to a classical encryption of that quantum state. That way, the receiver can obtain guarantees on the quantum state by asking the sender to prove properties directly on the classical encryption. This method is therefore non-destructive, and it makes it possible to verify a very large class of properties. For instance, we can force the sender to send different categories of states depending on whether they know a classical password or not. Moreover, we can also provide guarantees to the sender: for example, we can ensure that the receiver will never learn whether the sender knows the password or not. We also extend this method to the multi-party setting. We show how it can prove useful to distribute a GHZ state between different parties, in such a way that only parties knowing a secret can be part of this GHZ state. Moreover, the identity of the parties that are part of the GHZ state remains hidden from any malicious party. A direct application would be to allow a server to create a secret sharing of a qubit between unknown parties, authorized for example by a third-party Certification Authority. Finally, we provide simpler “blind” versions of the protocols that could prove useful in Anonymous Transmission or Quantum Onion Routing, and we make explicit a cryptographic function required in our protocols, based on the hardness of the Learning With Errors problem.

Mitigating errors by quantum verification and post-selection

Correcting errors due to noise in quantum circuits run on current and near-term quantum hardware is essential for any convincing demonstration of quantum advantage. Indeed, in many cases it has been shown that noise renders quantum circuits efficiently classically simulable, thereby destroying any quantum advantage potentially offered by an ideal (noiseless) implementation of these circuits. Although the technique of quantum error correction (QEC) makes it possible to correct these errors very accurately, QEC usually requires a large overhead of physical qubits which is not reachable with currently available quantum hardware. This has been the motivation behind the field of quantum error mitigation, which aims at developing techniques to correct an important part of the errors in quantum circuits while remaining compatible with current and near-term quantum hardware. In this work, we present a technique for quantum error mitigation based on a technique from quantum verification, the so-called accreditation protocol, together with post-selection. Our technique allows for correcting the expectation value of an observable $O$ obtained from multiple runs of noisy quantum circuits, where the noise in these circuits occurs at the level of preparations, gates, and measurements. We discuss the sample complexity of our procedure and provide rigorous guarantees that errors are mitigated under some realistic assumptions on the noise. Our technique also allows for time-dependent noise behaviours, as we allow the output states to differ between different runs of the accreditation protocol. We validate our findings by running our technique on currently available quantum hardware.
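The post-selection step described above can be illustrated with a toy model (ours, not the accreditation protocol itself): each circuit run produces a verification flag and an outcome, runs whose flag indicates detected noise are discarded, and the observable is estimated from the surviving runs. All names and noise rates below are illustrative assumptions.

```python
import random

def mitigated_expectation(shots, p_noise=0.2, p_detect=0.9, seed=0):
    """Toy post-selection model: the ideal circuit always outputs |0> (Z = +1).
    Each run is noisy with probability p_noise; a verification test flags
    noisy runs with probability p_detect. We discard flagged runs and
    estimate <Z> from the rest."""
    rng = random.Random(seed)
    kept = []
    for _ in range(shots):
        noisy = rng.random() < p_noise
        flagged = noisy and rng.random() < p_detect
        if flagged:
            continue  # post-selection: drop runs that failed verification
        outcome = 1 if noisy else 0  # undetected noise flips |0> -> |1>
        kept.append(1 - 2 * outcome)  # Z eigenvalue: 0 -> +1, 1 -> -1
    return sum(kept) / len(kept), len(kept)
```

With these illustrative rates, the mitigated estimate lands close to the ideal value $+1$, at the cost of discarding roughly a $p_{\text{noise}} \cdot p_{\text{detect}}$ fraction of the samples, which is the sample-complexity trade-off the abstract alludes to.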

Efficient Construction of Quantum Physical Unclonable Functions with Unitary t-designs

Quantum physical unclonable functions, or QPUFs, are rapidly emerging as theoretical hardware solutions for providing secure cryptographic functionalities such as key exchange, message authentication, and entity identification, among others. Recent works have shown that in order to provide provable security of these solutions against any quantum polynomial-time adversary, a QPUF is required to be a unitary sampled uniformly at random from the Haar measure. This, however, is known to require an exponential amount of resources. In this work, we propose an efficient construction of these devices using unitary t-designs, called QPUF_t. Along the way, we modify the existing security definitions of QPUFs to include efficient constructions and show that QPUF_t still retains the provable security guarantees against a bounded quantum polynomial-time adversary with t-query access to the device. This also provides the first use case of unitary t-design constructions for arbitrary t, as opposed to previous applications of t-designs where usually only a few (relatively low) values of t are known to be useful for performing some task. We study the noise-resilience of QPUF_t against a specific type of noise, namely unitary noise, and show that some resilience can be achieved, particularly when the error rates affecting individual qubits become smaller as the system size increases. To make the noise-resilience more realistic and meaningful, we conclude that some notion of error mitigation or correction should be introduced.

A Unified Framework For Quantum Unforgeability

In this paper, we continue the line of work initiated by Boneh and Zhandry at CRYPTO 2013 and EUROCRYPT 2013, in which they formally defined the notion of unforgeability against quantum adversaries, specifically for classical message authentication codes and classical digital signature schemes. We develop a general and parameterised quantum game-based security model unifying unforgeability for both classical and quantum constructions, allowing us for the first time to present a complete quantum cryptanalysis framework for unforgeability. In particular, we prove how our definitions subsume previous ones while considering more fine-grained adversarial models, capturing the full spectrum of superposition attacks. The subtlety here resides in the characterisation of a forgery. We show that the strongest level of unforgeability, namely existential unforgeability, can only be achieved if only messages orthogonal to previously queried ones are considered to be forgeries. In particular, we present a non-trivial attack if any overlap between the forged message and previously queried ones is allowed. We further show that deterministic constructions can only achieve the weaker notion of unforgeability, that is, selective unforgeability, against such restricted adversaries, but that selective unforgeability breaks if general quantum adversaries (capable of general superposition attacks) are considered. On the other hand, we show that a pseudorandom function (PRF) is sufficient for constructing a selectively unforgeable classical primitive against full quantum adversaries. Moreover, we show similar positive results relying on pseudorandom unitaries (PRUs) for quantum primitives. These results demonstrate the generality of our framework, which could be applicable to other primitives beyond the cases analysed in this paper.

Analysis of satellite-to-ground quantum key distribution with adaptive optics

Future quantum communication infrastructures will rely on both terrestrial and space-based links integrating high-performance optical systems engineered for this purpose. In space-based downlinks in particular, the loss budget and the variations in signal propagation due to atmospheric turbulence effects impose a careful optimization of the coupling of light into the single-mode fibers required for interfacing with the receiving stations and the ground networks. In this work, we perform a comprehensive study of the role of adaptive optics (AO) in this optimization, focusing on realistic baseline configurations of prepare-and-measure quantum key distribution (QKD), with both discrete and continuous-variable encoding, and including finite-size effects. Our analysis uses existing experimental turbulence datasets taken during both day and night to model the coupled signal statistics following a wavefront distortion correction with AO, and allows us to estimate the secret key rate for a range of critical parameters, such as turbulence strength, satellite altitude and ground telescope diameter. The results we derive illustrate the interest of adopting advanced AO techniques in several practical configurations.

QEnclave -- A practical solution for secure quantum cloud computing

We introduce a secure hardware device named a QEnclave that can secure the remote execution of quantum operations while using only classical controls. This device extends to quantum computing the classical concept of a secure enclave, which isolates a computation from its environment to provide privacy and tamper-resistance. Remarkably, our QEnclave performs only single-qubit rotations, but can nevertheless be used to secure an arbitrary quantum computation even if the qubit source is controlled by an adversary. More precisely, by attaching a QEnclave to a quantum computer, a remote client controlling the QEnclave can securely delegate its computation to the server using only classical communication. We investigate the security of our QEnclave by modeling it as an ideal functionality named Remote State Rotation. We show that this resource, similar to the previously introduced functionality of remote state preparation, allows blind delegated quantum computing with perfect security. Our proof relies on standard tools from delegated quantum computing. Working in the Abstract Cryptography framework, we show a construction of remote state preparation from remote state rotation that preserves security. An immediate consequence is the weakening of the requirements for blind delegated computation. While previous delegated protocols relied on a client that can either generate or measure quantum states, we show that the same functionality can be achieved with a client that only transforms quantum states, without generating or measuring them.

Hybrid PUF: A Novel Way to Enhance the Security of Classical PUFs

Physical unclonable functions (PUFs) provide a unique ‘fingerprint’ to a physical entity by exploiting its inherent physical randomness. With the help of quantum information theory, this paper proposes solutions to protect PUFs against machine learning-based attacks. Based on their querying capability, we first divide adversaries into two classes, namely adaptive and weak adversaries. We also modify an existing security notion, universal unforgeability, to capture the power of these two classes of adversaries. We then introduce the notion of a hybrid PUF (HPUF), using a classical PUF and quantum conjugate coding. This construction encodes the output of a classical PUF in non-orthogonal quantum states. We show that the indistinguishability of those states can significantly enhance the security of classical PUFs against weak adversaries. Moreover, we show that learning the underlying classical PUF from the outputs of our HPUF construction is at least as hard as learning the classical PUF from its random noisy outputs. To prevent adversaries from querying the PUFs adaptively, we borrow ideas from a classical lockdown technique and apply them to our hybrid PUF. We show that the hybrid PUF, together with the lockdown technique, termed the hybrid locked PUF (HLPUF), provides a secure client authentication protocol against adaptive adversaries and is implementable with present-day quantum communication technology. Moreover, we show that the HLPUF allows the server to reuse challenges for further client authentication, providing an efficient solution for running a PUF-based client authentication protocol over a long period while maintaining a small challenge-response pair database on the server side. Finally, we explore the lockdown technique with quantum PUFs and show that the direct adaptation of the classical lockdown technique does not work with fully quantum PUFs.
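The intuition behind conjugate coding here can be sketched in a toy qubit model (our illustration; the function names and numbers are not from the paper): a bit measured in its encoding basis is recovered perfectly, while a measurement in the wrong basis returns a uniformly random result, so an adversary ignorant of the basis choices guesses each encoded bit correctly only about three quarters of the time.

```python
import random

def measure(bit, enc_basis, meas_basis, rng):
    """Ideal qubit model of BB84-style conjugate coding: measuring in the
    encoding basis recovers the bit; the wrong basis gives a random result."""
    return bit if enc_basis == meas_basis else rng.randint(0, 1)

def guessing_rate(trials, knows_basis, seed=1):
    """Fraction of encoded bits recovered by a party who does (honest) or
    does not (adversary) know the secret basis string."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        bit, basis = rng.randint(0, 1), rng.randint(0, 1)
        meas = basis if knows_basis else rng.randint(0, 1)
        correct += measure(bit, basis, meas, rng) == bit
    return correct / trials
```

In this idealized model, a party knowing the basis recovers every bit, while a basis-ignorant adversary succeeds on roughly 3/4 of the bits; this information gap is what the hybrid construction leverages against weak adversaries.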

The Interplay between Quantum Contextuality and Wigner Negativity

The use of quantum information in technology promises to supersede the so-called classical devices used nowadays. Understanding which features are inherently non-classical is crucial for reaching better-than-classical performance. This thesis focuses on two nonclassical behaviours: quantum contextuality and Wigner negativity. The former is a notion superseding nonlocality that can be exhibited by quantum systems. To date, it has mostly been studied in discrete-variable scenarios, where contextuality has been shown to be necessary and sufficient for advantages in some cases. On the other hand, negativity of the Wigner function is another unsettling non-classical feature of quantum states that originates from the phase-space formulation of continuous-variable quantum optics. Continuous-variable scenarios offer promising candidates for implementing quantum computations, and Wigner negativity is known to be a necessary resource for quantum speedup with continuous variables. However, contextuality has been little studied and understood in continuous-variable scenarios. We first set out a robust framework for properly treating contextuality in continuous variables. We also quantify contextuality in such scenarios using tools from infinite-dimensional optimisation theory. Building upon this, we show that Wigner negativity is equivalent to contextuality in continuous variables with respect to Pauli measurements, thus establishing a continuous-variable analogue of a celebrated result by Howard et al. We then introduce experimentally-friendly witnesses for Wigner negativity of single-mode and multimode quantum states, based on fidelities with Fock states, again using tools from infinite-dimensional optimisation theory. We further extend the range of previously known discrete-variable results linking contextuality and advantage into the new territory of information retrieval.

The interplay between quantum contextuality and Wigner negativity

Quantum physics has revolutionised our way of conceiving nature and is now bringing about a new technological revolution. The use of quantum information in technology promises to supersede the so-called classical devices used nowadays. Understanding which features are inherently non-classical is crucial for reaching better-than-classical performance. This thesis focuses on two nonclassical behaviours: quantum contextuality and Wigner negativity. To date, contextuality has mostly been studied in discrete-variable scenarios, where observables take values in discrete and usually finite sets. In those scenarios, contextuality has been shown to be necessary and sufficient for advantages in some cases. On the other hand, negativity of the Wigner function is another unsettling non-classical feature of quantum states that originates from the phase-space formulation of quantum optics. Wigner negativity is known to be a necessary resource for quantum speedup. We set out a robust framework for properly treating contextuality in continuous variables. We quantify contextuality in such scenarios using tools from infinite-dimensional optimisation theory. Building upon this, we show that Wigner negativity is equivalent to contextuality in continuous variables with respect to Pauli measurements. We then introduce experimentally-friendly witnesses for Wigner negativity of multimode quantum states, based on fidelities with Fock states, again using infinite-dimensional linear programming techniques. We further extend the range of previously known discrete-variable results linking contextuality and advantage into the new territory of discrete-variable information retrieval.

Flexible entanglement-distribution network with an AlGaAs chip for secure communications

Quantum communication networks enable applications ranging from highly secure communication to clock synchronization and distributed quantum computing. Miniaturized, flexible, and cost-efficient resources will be key elements for ensuring the scalability of such networks as they progress towards large-scale deployed infrastructures. Here, we bring these elements together by combining an on-chip, telecom-wavelength, broadband entangled-photon source with industry-grade flexible-grid wavelength division multiplexing techniques to demonstrate reconfigurable entanglement distribution between up to 8 users in a resource-optimized quantum network topology. As a benchmark application we use quantum key distribution and show low error and high secret key generation rates across several frequency channels, over both symmetric and asymmetric metropolitan-distance fiber-optic links, including finite-size effects. By adapting the bandwidth allocation to specific network constraints, we also illustrate the flexible networking capability of our configuration. Together with the potential of our semiconductor source for distributing secret keys over a 60 nm bandwidth with commercial multiplexing technology, these results offer a promising route to the deployment of scalable quantum network architectures.

Multi-Party Quantum Cryptography : from Folklore to Real-World

Quantum cryptography builds upon decades of advances in both classical cryptography and networks. However, contrary to its classical counterparts, it is still in its infancy applicability-wise, even in the scenario where powerful quantum computers are readily available, and more theoretical work is required before it can provide concrete benefits. The first goal is to formalise, in rigorous quantum security frameworks, the properties of various techniques that have been transposed, often without proper justification, from the classical world. Then, recent developments in quantum technologies suggest a mostly cloud-based future availability of quantum devices; therefore, the quantum computation and communication costs of protocol participants must be lowered before these protocols can be useful. Finally, in most situations, additional steps need to be taken to tailor protocols to the specifications of devices, allowing for optimisations both in quantum memory and in operation requirements. This thesis contributes to these three aspects by: (i) giving the first general security definition of the Quantum Cut-and-Choose, a technique for proving the correctness of a quantum message; (ii) presenting a more realistic framework of security against superposition attacks, where classical protocols run on inherently quantum devices; (iii) constructing an efficient delegated multi-party quantum computation protocol, allowing clients to securely delegate a private computation to a quantum server; and (iv) building a method for verifying the honesty of a quantum server performing computations on behalf of a client, with no operation or memory overhead compared to the unprotected computation.

Quantum technologies in space

Recently, the European Commission, supported by many European countries, announced large investments towards the commercialization of quantum technology (QT) to address and mitigate some of the biggest challenges facing today’s digital era, e.g. secure communication and computing power. For more than two decades the QT community has been working on the development of QTs, which promise landmark breakthroughs leading to commercialization in various areas. The ambitious goals of the QT community and the expectations of EU authorities cannot be met solely by individual initiatives of single countries; they require a combined European effort of large and unprecedented dimensions, comparable only to the Galileo or Copernicus programs. Strong international competition calls for a coordinated European effort towards the development of QT in and for space, including research and development of technology in the areas of communication and sensing. Here, we aim at summarizing the state of the art in the development of quantum technologies that have an impact in the field of space applications. Our goal is to outline a complete framework for the design, development, implementation, and exploitation of quantum technology in space.

Witnessing Wigner Negativity

Negativity of the Wigner function is arguably one of the most striking non-classical features of quantum states. Beyond its fundamental relevance, it is also a necessary resource for quantum speedup with continuous variables. As quantum technologies emerge, the need to identify and characterize the resources which provide an advantage over existing classical technologies becomes more pressing. Here we derive witnesses for Wigner negativity of quantum states, based on fidelities with Fock states, which can be reliably measured using standard detection setups. They possess a threshold expected value indicating whether the measured state exhibits the desired property or not. We phrase the problem of finding the threshold values for our witnesses as an infinite-dimensional linear optimisation problem. By relaxing and restricting the corresponding linear programs, we derive two hierarchies of semidefinite programs, which provide numerical sequences of increasingly tighter upper and lower bounds for the threshold values. We further show that both sequences converge to the threshold value. Moreover, our witnesses form a complete family: each Wigner-negative state is detected by at least one witness. This provides a reliable method for experimentally witnessing Wigner negativity of quantum states from few measurements. From a foundational perspective, our work provides insights into the set of positive Wigner functions, which still lacks a proper characterisation.
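The single-mode case can be made concrete: the Wigner function of a Fock state has a closed form involving Laguerre polynomials, and its value at the origin, (-1)^n/π, is negative for every odd n. The following numpy sketch (using the convention ħ = 1, with the Wigner function normalised to integrate to 1) illustrates the negativity these witnesses are designed to detect; it is a textbook formula, not the paper's witness computation:

```python
import numpy as np

def laguerre(n, x):
    # Laguerre polynomial L_n(x) via the standard three-term recurrence:
    # (k+1) L_{k+1}(x) = (2k+1-x) L_k(x) - k L_{k-1}(x).
    if n == 0:
        return 1.0
    lm, l = 1.0, 1.0 - x
    for k in range(1, n):
        lm, l = l, ((2 * k + 1 - x) * l - k * lm) / (k + 1)
    return l

def wigner_fock(n, x, p):
    # Wigner function of the Fock state |n>:
    # W_n(x, p) = ((-1)^n / pi) * exp(-(x^2+p^2)) * L_n(2(x^2+p^2)).
    r2 = x ** 2 + p ** 2
    return ((-1) ** n / np.pi) * np.exp(-r2) * laguerre(n, 2 * r2)

# The vacuum is Gaussian and everywhere positive, while every odd Fock
# state is negative at the origin: W_n(0,0) = (-1)^n / pi.
assert wigner_fock(0, 0.0, 0.0) > 0
assert wigner_fock(1, 0.0, 0.0) < 0
```

Evaluating such closed forms on a grid is a common way to visualise the negative regions that fidelity-based witnesses certify indirectly.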

Composable Security for Multipartite Entanglement Verification

We present a composably secure protocol allowing $n$ parties to test an entanglement generation resource controlled by a possibly dishonest party. The test consists only of local quantum operations and authenticated classical communication once a state is shared among them, and it provides composable security: namely, it can be used as a secure subroutine by $n$ honest parties within larger communication protocols to test whether a source is sharing quantum states that are at least $\epsilon$-close to the GHZ state. This claim comes on top of previous results on multipartite entanglement verification, where security was studied in the usual game-based model. Here, we improve the protocol to make it more suitable for practical use in a quantum network, and we study its security in the Abstract Cryptography framework to highlight composability issues and avoid hidden assumptions. This framework is a top-to-bottom theory that makes explicit every piece of information that each component (party or resource) receives at every time-step of the protocol. Moreover, any security proof, which amounts to showing indistinguishability between an ideal resource having the desired security properties (up to local simulation) and the concrete resource representing the protocol, is composable for free in this setting. This allows us to readily compose our basic protocol in order to create a composably secure multi-round protocol enabling honest parties to obtain a state close to a GHZ state or an abort signal, even in the presence of a noisy or malicious source. Our protocol can typically be used as a subroutine in a quantum internet, to securely share a GHZ state among the network before performing a communication or computation protocol.
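The closeness parameter being certified can be illustrated numerically: for a depolarised GHZ source, the fidelity with the ideal GHZ state fixes a natural ε. The numpy sketch below is illustrative only; the protocol itself certifies closeness via local operations and classical communication, not by a direct fidelity computation, and the depolarising noise model is an assumption for the example:

```python
import numpy as np

def ghz(n):
    # |GHZ_n> = (|0...0> + |1...1>) / sqrt(2)
    v = np.zeros(2 ** n)
    v[0] = v[-1] = 1 / np.sqrt(2)
    return v

def ghz_fidelity(rho):
    # Fidelity <GHZ|rho|GHZ> of a density matrix with the ideal GHZ state.
    n = int(np.log2(rho.shape[0]))
    g = ghz(n)
    return float(np.real(g @ rho @ g))

# A depolarised GHZ source: rho = (1-q) |GHZ><GHZ| + q I / 2^n.
n, q = 3, 0.1
g = ghz(n)
rho = (1 - q) * np.outer(g, g) + q * np.eye(2 ** n) / 2 ** n
eps = 1 - ghz_fidelity(rho)   # infidelity of the shared state
assert eps < 0.15
```

A source passing the test with small ε guarantees the shared state is usable by follow-up network protocols that tolerate that level of noise.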

Delegating Multi-Party Quantum Computations vs. Dishonest Majority in Two Quantum Rounds

Multi-Party Quantum Computation (MPQC) has attracted a lot of attention as a potential killer app for quantum networks, through its ability to preserve the privacy and integrity of the highly valuable computations they would enable. Contributing to the latest challenges in this field, we present a composable protocol achieving blindness and verifiability even in the case of a single honest client. The security of our protocol is reduced, in an information-theoretically secure way, to that of a classical composable Secure Multi-Party Computation (SMPC) used to coordinate the various parties. Our scheme thus provides a statistically secure upgrade of such a classical scheme to a quantum one with the same level of security. In addition, (i) the clients can delegate their computation to a powerful fully fault-tolerant server and only need to perform single-qubit operations to unlock the full potential of multi-party quantum computation; (ii) the amount of quantum communication with the server is reduced to sending quantum states at the beginning of the computation and receiving the output states at the end, which is optimal and removes the need for interactive quantum communication; and (iii) it has a low constant multiplicative qubit overhead compared to the single-client delegated protocol it is built upon. The main technical ingredient of our paper is the bootstrapping of the MPQC construction by Double Blind Quantum Computation, a new composable resource for blind multiparty quantum computation, which demonstrates the surprising fact that the full protocol does not require verifiability of all components to achieve security.

Continuous-variable quantum cryptographic protocols

This thesis is concerned with the study and analysis of two quantum cryptographic protocols: quantum key distribution (QKD) and unforgeable quantum money in the continuous-variable (CV) framework. The main advantage of CV protocols is that their implementation only requires standard telecom components. QKD allows two distant parties, Alice and Bob, to establish a secure key, even in the presence of an eavesdropper, Eve. The remarkable property of QKD is that its security can be established in the information-theoretic setting, without appealing to any computational assumptions. Proving the security of CV-QKD protocols is challenging, since the protocols are described in an infinite-dimensional Fock space. One of the open questions in CV-QKD was establishing security for two-way QKD protocols against general attacks. We exploit the invariance of the protocol under the unitary group U(n) to establish composable security against general attacks. We answer another pressing question in the field of CV-QKD with discrete modulation by establishing the asymptotic security of such protocols against collective attacks. We provide a general technique to derive a lower bound on the secret key rate by formulating the problem as a semidefinite program. Quantum money exploits the no-cloning property of quantum mechanics to generate unforgeable tokens, banknotes, and credit cards. We propose a CV private-key quantum money scheme with classical verification, motivated by the goal of facilitating practical implementation. Previous classical-verification money schemes use single-photon detectors for verification, while our protocols use coherent detection.

Experimental demonstration of quantum advantage for NP verification with limited information

In recent years, many computational tasks have been proposed as candidates for showing a quantum computational advantage, that is, an advantage in the time needed to perform the task using a quantum instead of a classical machine. Nevertheless, practical demonstrations of such an advantage remain particularly challenging because of the difficulty of bringing together all the necessary theoretical and experimental ingredients. Here, we show an experimental demonstration of a quantum computational advantage in a prover-verifier interactive setting, where the computational task consists in the verification of an NP-complete problem by a verifier who only gets limited information about the proof sent by an untrusted prover, in the form of a series of unentangled quantum states. We provide a simple linear-optical implementation that can perform this verification task efficiently (within a few seconds), while we also provide strong evidence that, for a fixed proof size, a classical computer would take a much longer time (assuming only that it takes exponential time to solve an NP-complete problem). While our computational advantage concerns a specific task in a scenario of mostly theoretical interest, it brings us a step closer to potentially useful applications, such as server-client quantum computing.

The Quantum Cut-and-Choose Technique and Quantum Two-Party Computation

The application and analysis of the Cut-and-Choose technique in protocols secure against quantum adversaries is not a straightforward transposition of the classical case, among other reasons because of the difficulty of using rewinding in the quantum realm. We introduce a Quantum Computation Cut-and-Choose (QC-CC) technique, a generalisation of the classical Cut-and-Choose, in order to build quantum protocols secure against quantum covert adversaries. Such adversaries can deviate arbitrarily, provided that their deviation is not detected. As an application of the QC-CC we give a protocol for securely performing two-party quantum computation with classical input/output. As a basis we use secure delegated quantum computing (Broadbent et al 2009), and in particular the garbled quantum computation of (Kashefi et al 2016), which is secure against only weak specious adversaries, as defined in (Dupuis et al 2010). A unique property of these protocols is the separation between classical and quantum communications and the asymmetry between client and server, which enables us to sidestep the quantum rewinding issues. This opens the prospect of applying the QC-CC to other quantum protocols with this separation. In our proof of security we adapt and use (at different parts) two quantum rewinding techniques, namely Watrous’ oblivious q-rewinding (Watrous 2009) and Unruh’s special q-rewinding (Unruh 2012). Our protocol achieves the same functionality as previous works (e.g. Dupuis et al 2012); however, using the QC-CC technique on the protocol from (Kashefi et al 2016) leads to the following key improvements: (i) only one-way offline quantum communication is necessary, (ii) only one party (the server) needs involved quantum technological abilities, and (iii) only minimal extra cryptographic primitives are required, namely one oblivious transfer for each input bit and quantum-safe commitments.

Two combinatorial MA-complete problems

Despite the interest in the complexity class MA, the randomized analog of NP, only a few natural MA-complete problems are known. The first problem was found by (Bravyi and Terhal, SIAM Journal of Computing 2009); it was then followed by (Crosson, Bacon and Brown, PRE 2010) and (Bravyi, Quantum Information and Computation 2015). Surprisingly, two of these problems are defined using terminology from quantum computation, while the third is inspired by quantum computation and retains physical terminology. This prevents classical complexity theorists from studying these problems, delaying potential progress, e.g., on the NP vs. MA question. Here, we define two new combinatorial problems and prove their MA-completeness. The first problem, ACAC, gets as input a succinctly described graph with some marked vertices. The problem is to decide whether there is a connected component containing only unmarked vertices, or the graph is far from having this property. The second problem, SetCSP, generalizes the standard constraint satisfaction problem (CSP) to constraints involving sets of strings. Technically, our proof that SetCSP is MA-complete is based on an observation by (Aharonov and Grilo, FOCS 2019), where it was noted that a restricted case of Bravyi and Terhal’s problem (namely, the uniform case) is already MA-complete; a simple trick allows this restricted case to be stated in combinatorial language. The fact that the first, more natural, problem ACAC is MA-hard follows quite naturally from this proof, while the containment of ACAC in MA is based on the theory of random walks. We note that the main result of Aharonov and Grilo carries over to the SetCSP problem in a straightforward way, implying that finding a gap-amplification procedure for SetCSP (as in Dinur’s PCP proof) is equivalent to MA=NP. This provides an alternative new path towards the major problem of derandomizing MA.
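On an explicitly given graph, the ACAC predicate (is there a connected component containing only unmarked vertices?) is easy to check; the hardness comes entirely from the graph being succinctly described and exponentially large, which is why membership in MA relies on random walks rather than exhaustive search. A short Python sketch of the explicit-graph version, for intuition only (the adjacency-dict representation is an assumption of the example):

```python
from collections import deque

def has_unmarked_component(adj, marked):
    # Decide whether an explicitly given graph (adjacency dict) has a
    # connected component consisting only of unmarked vertices.
    seen = set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:                      # BFS over one component
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        if not any(v in marked for v in comp):
            return True                   # found an all-unmarked component
    return False

adj = {0: [1], 1: [0], 2: [3], 3: [2]}
assert has_unmarked_component(adj, marked={0})          # component {2, 3} is clean
assert not has_unmarked_component(adj, marked={0, 2})   # every component is touched
```

In ACAC the BFS above is unavailable, since listing the vertices of a succinct graph already takes exponential time; a random walk started at a candidate vertex can, however, detect a marked vertex in its component with good probability.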

Variational Quantum Cloning: Improving Practicality for Quantum Cryptanalysis

Cryptanalysis of standard quantum cryptographic systems generally involves finding optimal adversarial attack strategies on the underlying protocols. The core principle of modelling quantum attacks in many cases reduces to the adversary’s ability to clone unknown quantum states, which facilitates the extraction of some meaningful secret information. Explicit optimal attack strategies typically require high computational resources due to large circuit depths or, in many cases, are unknown. In this work, we propose variational quantum cloning (VQC), a quantum-machine-learning-based cryptanalysis algorithm which allows an adversary to obtain optimal approximate cloning strategies with short-depth quantum circuits, trained using hybrid classical-quantum techniques. The algorithm contains operationally meaningful cost functions with theoretical guarantees, quantum circuit structure learning, and gradient-descent-based optimisation. Our approach enables the end-to-end discovery of hardware-efficient quantum circuits to clone specific families of quantum states, which in turn leads to an improvement in cloning fidelities when implemented on quantum hardware (the Rigetti Aspen chip). Finally, we connect these results to quantum cryptographic primitives, in particular quantum coin flipping. We derive attacks on two protocols as examples, based on quantum cloning and facilitated by VQC. As a result, our algorithm can improve near-term attacks on these protocols, using approximate quantum cloning as a resource.

Secure Quantum Two-Party Computation: Impossibility and Constructions

Secure two-party computation considers the problem of two parties computing a joint function of their private inputs without revealing anything beyond the output of the computation. In this work, we take the first steps towards understanding the setting in which the two parties want to evaluate a joint quantum functionality while using only a classical channel between them. Our first result indicates that it is in general impossible to realize a two-party quantum functionality against malicious adversaries with black-box simulation, relying only on classical channels. The negative result stems from reducing the existence of a black-box simulator to an extractor for a classical proof of quantum knowledge, which in turn leads to a violation of the quantum no-cloning theorem. Next, we introduce the notion of oblivious quantum function evaluation (OQFE). An OQFE is a two-party quantum cryptographic primitive with one fully classical party (Alice), whose input is (a classical description of) a quantum unitary $U$, and a quantum party (Bob), whose input is a quantum state $\psi$. In particular, Alice receives a classical output corresponding to the measurement of $U(\psi)$, while Bob receives no output. In OQFE, Bob remains oblivious to Alice’s input, while Alice learns nothing about $\psi$ beyond what can be learned from the output. We present two constructions, one secure against semi-honest parties and the other against malicious parties. Due to the no-go result mentioned above, we consider what is arguably the best possible notion obtainable in our model concerning malicious adversaries: one-sided simulation security. Our protocol relies on the assumption of injective homomorphic trapdoor OWFs, which in turn rely on the LWE problem. As a result, we put forward a first, simple and modular construction of one-sided quantum two-party computation and quantum oblivious transfer over classical networks.

Qualifying quantum approaches for hard industrial optimization problems. A case study in the field of smart-charging of electric vehicles

In order to qualify quantum algorithms for industrial NP-hard problems, it is necessary to compare them to available polynomial-time approximate classical algorithms, and not only to exact algorithms, which are exponential by nature. This is a great challenge as, in many cases, bounds on the reachable approximation ratios exist according to some highly trusted conjectures of complexity theory. An interesting setup for such qualification is thus to focus on particular instances of these problems known to be “less difficult” than the worst-case ones, for which the above bounds can be outperformed: quantum algorithms should perform at least as well as the conventional approximate ones on these instances, up to very large sizes. We present a case study of such a protocol for two industrial problems drawn from the strongly developing field of smart charging of electric vehicles. Tailored implementations of the Quantum Approximate Optimization Algorithm (QAOA) have been developed for both problems and tested numerically with classical resources, either by emulation of Pasqal’s Rydberg-atom-based quantum device or using the Atos Quantum Learning Machine. In both cases, the quantum algorithms exhibit the same approximation ratios as conventional approximation algorithms, or improve on them. These are very encouraging results, although still for instances of limited size, as allowed by studies on classical computing resources. The next step will be to confirm them on larger instances, on actual devices, and for more complex versions of the problems addressed.
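As a minimal illustration of how a QAOA approximation ratio is evaluated numerically, the sketch below simulates generic depth-1 QAOA for MaxCut on a single edge with a numpy statevector and sweeps the two variational angles. This is a toy instance chosen for self-containedness, not the paper's tailored implementations or smart-charging instances:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# MaxCut cost operator for the single edge (0, 1): C = (1 - Z0 Z1) / 2,
# diagonal with eigenvalue 1 on the cut assignments |01> and |10>.
C = (np.eye(4) - np.kron(Z, Z)) / 2

def expm_x(beta):
    # Single-qubit mixer: e^{-i beta X} = cos(beta) I - i sin(beta) X.
    return np.cos(beta) * I2 - 1j * np.sin(beta) * X

def qaoa_expectation(gamma, beta):
    # Depth-1 QAOA state: e^{-i beta (X0 + X1)} e^{-i gamma C} |+>|+>.
    psi = np.full(4, 0.5, dtype=complex)          # uniform superposition
    psi = np.exp(-1j * gamma * np.diag(C)) * psi  # cost layer (C is diagonal)
    psi = np.kron(expm_x(beta), expm_x(beta)) @ psi
    return float(np.real(psi.conj() @ C @ psi))

# Sweep both angles on a coarse grid; for this instance the analytic
# optimum <C> = (1 + sin(4 beta) sin(gamma)) / 2 reaches 1.
best = max(qaoa_expectation(g, b)
           for g in np.linspace(0, np.pi, 60)
           for b in np.linspace(0, np.pi, 60))
# A uniformly random assignment cuts the edge with probability 1/2;
# depth-1 QAOA does strictly better on this toy instance.
assert best > 0.9
```

Comparing `best` to the optimal cut value gives the approximation ratio; the qualification protocol described above amounts to repeating this comparison against the ratio guaranteed by the best polynomial-time classical algorithm on the same instances.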

Client-Server Identification Protocols with Quantum PUF

Recently, major progress has been made towards the realisation of the quantum internet to enable a broad range of applications that would be out of reach for the classical internet. Most of these applications, such as delegated quantum computation, require running a secure identification protocol between a low-resource and a high-resource party to provide secure communication. Physical Unclonable Functions (PUFs) have been shown to be resource-efficient hardware solutions for providing secure identification schemes in both classical and quantum settings. In this work, we propose two identification protocols based on quantum PUFs (qPUFs) as defined by Arapinis et al. In the first protocol, the low-resource party wishes to prove its identity to the high-resource party; in the second protocol, the roles are reversed. Unlike existing identification protocols based on Quantum Read-out PUFs, which rely on security against a specific family of attacks, our protocols provide provable exponential security against any Quantum Polynomial-Time (QPT) adversary with resource-efficient parties. We provide a comprehensive comparison between the two proposed protocols in terms of resources, such as the quantum memory and computing ability required by both parties, as well as the communication overhead between them. A stand-out feature of our second protocol is the secure identification of a high-resource party by running a purely classical verification algorithm. This is achieved by delegating quantum operations to the high-resource party and utilising the resulting classical outcomes for identification.

Feasibility of satellite-to-ground continuous-variable quantum key distribution

Establishing secure communication links at a global scale is a major potential application of quantum information science, but it is also extremely challenging for the underlying technology. While milestone experiments using satellite-to-ground links and exploiting single-photon encoding for implementing quantum key distribution have recently shown that this goal is achievable, it is still necessary to further investigate practical solutions compatible with classical optical communication systems. Here we examine the feasibility of establishing secret keys in a satellite-to-ground downlink configuration using continuous-variable encoding, which can be implemented using standard telecommunication components certified for the space environment and able to operate at high symbol rates. Considering a realistic channel model and state-of-the-art technology, and exploiting an orbit subdivision technique for mitigating fluctuations in the transmission efficiency, we find positive secret key rates for a low-Earth-orbit scenario, while finite-size effects can be a limiting factor for higher orbits. Our analysis determines regions of values for important experimental parameters where secret key exchange is possible, and can be used as a guideline for experimental efforts in this direction.

Quantum Technology for Economists

Research on quantum technology spans multiple disciplines: physics, computer science, engineering, and mathematics. The objective of this manuscript is to provide an accessible introduction to this emerging field for economists, centered on quantum computing and quantum money. We proceed in three steps. First, we discuss basic concepts in quantum computing and quantum communication, assuming knowledge of linear algebra and statistics, but not of computer science or physics. This covers fundamental topics such as qubits, superposition, entanglement, quantum circuits, oracles, and the no-cloning theorem. Second, we provide an overview of quantum money, an early invention of the quantum communication literature that has recently been partially implemented in an experimental setting. One form of quantum money offers the privacy and anonymity of physical cash, the option to transact without the involvement of a third party, and the efficiency and convenience of a debit card payment. Such features cannot be achieved in combination with any other form of money. Finally, we review all existing quantum speedups that have been identified for algorithms used to solve and estimate economic models. This includes function approximation, linear systems analysis, Monte Carlo simulation, matrix inversion, principal component analysis, linear regression, interpolation, numerical differentiation, and true random number generation. We also discuss the difficulty of achieving quantum speedups and comment on common misconceptions about what is achievable with quantum computing.

Security Limitations of Classical-Client Delegated Quantum Computing

Secure delegated quantum computing allows a computationally weak client to outsource an arbitrary quantum computation to an untrusted quantum server in a privacy-preserving manner. One of the promising candidates for achieving classical delegation of quantum computation is classical-client remote state preparation ($RSP_{CC}$), where a client remotely prepares a quantum state using a classical channel. However, the privacy loss incurred by employing $RSP_{CC}$ as a sub-module is unclear. In this work, we investigate this question using the Constructive Cryptography framework of Maurer and Renner (ICS ’11). We first identify the goal of $RSP_{CC}$ as the construction of ideal RSP resources from classical channels, and then reveal the security limitations of using $RSP_{CC}$. First, we uncover a fundamental relationship between constructing ideal RSP resources (from classical channels) and the task of cloning quantum states: any classically constructed ideal RSP resource must leak to the server the full classical description (possibly in an encoded form) of the generated quantum state, even if we target computational security only. As a consequence, we find that the realization of common RSP resources, without drastically weakening their guarantees, is impossible due to the no-cloning theorem. Second, the above result does not rule out that a specific $RSP_{CC}$ protocol can replace the quantum channel at least in some contexts, such as the Universal Blind Quantum Computing (UBQC) protocol of Broadbent et al. (FOCS ’09). However, we show that the resulting UBQC protocol cannot maintain its proven composable security as soon as $RSP_{CC}$ is used as a subroutine. Third, we show that replacing the quantum channel of the above UBQC protocol with the $RSP_{CC}$ protocol QFactory of Cojocaru et al. (Asiacrypt ’19) preserves the weaker, game-based security of UBQC.

Quantum statistical query learning

We propose a learning model called the quantum statistical query (QSQ) model, which extends the SQ learning model introduced by Kearns to the quantum setting. Our model can also be seen as a restriction of the quantum PAC learning model: here, the learner does not have direct access to quantum examples, but can only obtain estimates of measurement statistics on them. Theoretically, this model provides a simple yet expressive setting to explore the power of quantum examples in machine learning. From a practical perspective, since simpler operations are required, learning algorithms in the QSQ model are more feasible for implementation on near-term quantum devices. We prove a number of results about the QSQ learning model. We first show that parity functions, (log n)-juntas and polynomial-sized DNF formulas are efficiently learnable in the QSQ model, in contrast to the classical setting where these problems are provably hard. This implies that many of the advantages of quantum PAC learning can be realized even in the more restricted quantum SQ learning model. It is well-known that weak statistical query dimension, denoted by WSQDIM(C), characterizes the complexity of learning a concept class C in the classical SQ model. We show that log(WSQDIM(C)) is a lower bound on the complexity of QSQ learning, and furthermore it is tight for certain concept classes C. Additionally, we show that this quantity provides strong lower bounds for the small-bias quantum communication model under product distributions. Finally, we introduce the notion of private quantum PAC learning, in which a quantum PAC learner is required to be differentially private. We show that learnability in the QSQ model implies learnability in the quantum private PAC model. Additionally, we show that in the private PAC learning setting, the classical and quantum sample complexities are equal, up to constant factors.
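In the classical SQ model on which the QSQ model builds, the learner never sees individual examples; it may only query an oracle for estimates of expectations, each accurate to within a tolerance tau. The following is a hypothetical simulation of such an oracle (the names and the empirical-averaging strategy are our own; a real SQ oracle may answer adversarially within the tolerance):

```python
import numpy as np

def sq_oracle(phi, concept, dist_sampler, tau, rng):
    """Simulated statistical query oracle: returns an empirical estimate of
    E_x[phi(x, concept(x))], accurate to within tau with high probability.
    (A real SQ oracle may return any value within tau of the expectation.)"""
    n_samples = int(np.ceil(10.0 / tau ** 2))  # Chernoff-style sample count
    xs = dist_sampler(n_samples, rng)
    return float(np.mean([phi(x, concept(x)) for x in xs]))

# Example query: the bias E[c(x)] of a parity concept on 3-bit strings
# under the uniform distribution (its true value is 0).
rng = np.random.default_rng(1)
parity = lambda x: (-1) ** int(x.sum() % 2)            # target concept c
uniform = lambda m, rng: rng.integers(0, 2, size=(m, 3))
est = sq_oracle(lambda x, cx: cx, parity, uniform, tau=0.1, rng=rng)
print(f"estimated E[c(x)] = {est:.3f}")
```

The QSQ model replaces such queries with estimates of measurement statistics on quantum examples, which is what allows parities and related classes, hard for classical SQ learners, to become learnable.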

Quantum learning algorithms imply circuit lower bounds

We establish the first general connection between the design of quantum algorithms and circuit lower bounds. Specifically, let $\mathfrak{C}$ be a class of polynomial-size concepts, and suppose that $\mathfrak{C}$ can be PAC-learned with membership queries under the uniform distribution with error $1/2 - \gamma$ by a time $T$ quantum algorithm. We prove that if $\gamma^2 \cdot T \ll 2^n/n$, then $\mathsf{BQE} \nsubseteq \mathfrak{C}$, where $\mathsf{BQE} = \mathsf{BQTIME}[2^{O(n)}]$ is an exponential-time analogue of $\mathsf{BQP}$. This result is optimal in both $\gamma$ and $T$, since it is not hard to learn any class $\mathfrak{C}$ of functions in (classical) time $T = 2^n$ (with no error), or in quantum time $T = \mathsf{poly}(n)$ with error at most $1/2 - \Omega(2^{-n/2})$ via Fourier sampling. In other words, even a marginal improvement on these generic learning algorithms would lead to major consequences in complexity theory. Our proof builds on several works in learning theory, pseudorandomness, and computational complexity, and crucially, on a connection between non-trivial classical learning algorithms and circuit lower bounds established by Oliveira and Santhanam (CCC 2017). Extending their approach to quantum learning algorithms turns out to create significant challenges. To achieve that, we show among other results how pseudorandom generators imply learning-to-lower-bound connections in a generic fashion, construct the first conditional pseudorandom generator secure against uniform quantum computations, and extend the local list-decoding algorithm of Impagliazzo, Jaiswal, Kabanets and Wigderson (SICOMP 2010) to quantum circuits via a delicate analysis. We believe that these contributions are of independent interest and might find other applications.

Dispelling Myths on Superposition Attacks: Formal Security Model and Attack Analyses

It is a folkloric belief that the security of classical cryptographic protocols is automatically broken if the Adversary is allowed to perform superposition queries and the honest players are forced to perform actions coherently on quantum states. Another widely held intuition is that enforcing measurements on the exchanged messages is enough to protect protocols from these attacks. However, the reality is much more complex. Security models dealing with superposition attacks only consider unconditional security. Conversely, security models considering computational security assume that all supposedly classical messages are measured, which forbids by construction the analysis of superposition attacks. Boneh and Zhandry have started to study the quantum computational security for classical primitives in their seminal work at Crypto'13, but only in the single-party setting. To the best of our knowledge, an equivalent model in the multiparty setting is still missing. In this work, we propose the first computational security model considering superposition attacks for multiparty protocols. We show that our new security model is satisfiable by proving the security of the well-known One-Time-Pad protocol and give an attack on a variant of the equally reputable Yao Protocol for Secure Two-Party Computations. The post-mortem of this attack reveals the precise points of failure, yielding highly counter-intuitive results: Adding extra classical communication, which is harmless for classical security, can make the protocol become subject to superposition attacks. We use this newly imparted knowledge to construct the first concrete protocol for Secure Two-Party Computation that is resistant to superposition attacks. Our results show that there is no straightforward answer to provide for either the vulnerabilities of classical protocols to superposition attacks or the adapted countermeasures.

Securing Quantum Computations in the NISQ Era

Recent experimental achievements motivate an ever-growing interest from companies starting to feel the limitations of classical computing. Yet, in light of ongoing privacy scandals, the future availability of quantum computing through remotely accessible servers poses peculiar challenges: Clients with quantum-limited capabilities want their data and algorithms to remain hidden, while being able to verify that their computations are performed correctly. Research in blind and verifiable delegation of quantum computing attempts to address this question. However, available techniques suffer not only from high overheads but also from over-sensitivity: When running on noisy devices, imperfections trigger the same detection mechanisms as malicious attacks, resulting in perpetually aborted computations. Hence, while malicious quantum computers are rendered harmless by blind and verifiable protocols, inherent noise severely limits their usability. We address this problem with an efficient, robust, blind, verifiable scheme to delegate deterministic quantum computations with classical inputs and outputs. We show that: 1) a malicious Server can cheat at most with an exponentially small success probability; 2) in case of sufficiently small noise, the protocol succeeds with a probability exponentially close to 1; 3) the overhead is barely a polynomial number of repetitions of the initial computation interleaved with test runs requiring the same physical resources in terms of memory and gates; 4) the amount of tolerable noise, measured by the probability of failing a test run, can be as high as 25% for some computations and will be generally bounded by 12.5% when using a planar graph resource state. The key points are that security can be provided without universal computation graphs and that, in our setting, full fault-tolerance is not needed to amplify the confidence level exponentially close to 1.

Non-interactive classical verification of quantum computation

In a recent breakthrough, Mahadev constructed an interactive protocol that enables a purely classical party to delegate any quantum computation to an untrusted quantum prover. In this work, we show that this same task can in fact be performed non-interactively and in zero-knowledge. Our protocols result from a sequence of significant improvements to the original four-message protocol of Mahadev. We begin by making the first message instance-independent and moving it to an offline setup phase. We then establish a parallel repetition theorem for the resulting three-message protocol, with an asymptotically optimal rate. This, in turn, enables an application of the Fiat-Shamir heuristic, eliminating the second message and giving a non-interactive protocol. Finally, we employ classical non-interactive zero-knowledge (NIZK) arguments and classical fully homomorphic encryption (FHE) to give a zero-knowledge variant of this construction. This yields the first purely classical NIZK argument system for QMA, a quantum analogue of NP. We establish the security of our protocols under standard assumptions in quantum-secure cryptography. Specifically, our protocols are secure in the Quantum Random Oracle Model, under the assumption that Learning with Errors is quantumly hard. The NIZK construction also requires circuit-private FHE.

Fault-tolerant quantum speedup from constant depth quantum circuits

A defining feature in the field of quantum computing is the potential of a quantum device to outperform its classical counterpart for a specific computational task. By now, several proposals exist showing that certain sampling problems can be done efficiently quantumly, but are not possible efficiently classically, assuming strongly held conjectures in complexity theory, a feature dubbed quantum speedup. However, the effect of noise on these proposals is not well understood in general, and in certain cases it is known that simple noise can destroy the quantum speedup. Here we develop a fault-tolerant version of one family of these sampling problems, which we show can be implemented using quantum circuits of constant depth. We present two constructions, each taking $poly(n)$ physical qubits, some of which are prepared in noisy magic states. The first of our constructions is a constant depth quantum circuit composed of single and two-qubit nearest neighbour Clifford gates in four dimensions. This circuit has one layer of interaction with a classical computer before final measurements. Our second construction is a constant depth quantum circuit with single and two-qubit nearest neighbour Clifford gates in three dimensions, but with two layers of interaction with a classical computer before the final measurements. For each of these constructions, we show that there is no classical algorithm which can sample according to its output distribution in $poly(n)$ time, assuming two standard complexity theoretic conjectures hold. The noise model we assume is the so-called local stochastic quantum noise. Along the way, we introduce various new concepts such as constant depth magic state distillation (MSD), and constant depth output routing, which arise naturally in measurement based quantum computation (MBQC), but have no constant-depth analogue in the circuit model.

Continuous Variable Quantum Advantages and Applications in Quantum Optics

Quantum physics has led to a revolution in our conception of the nature of our world and is now bringing about a technological revolution. The use of quantum information promises indeed applications that outperform those of today’s so-called classical devices. Continuous variable quantum information theory refers to the study of quantum information encoded in continuous degrees of freedom of quantum systems. This theory extends the mathematical study of quantum information to quantum states in Hilbert spaces of infinite dimension. It offers different perspectives compared to discrete variable quantum information theory and is particularly suitable for the description of quantum states of light. Quantum optics is thus a natural experimental platform for developing quantum applications in continuous variable. This thesis focuses on three main questions: where does a quantum advantage, that is, the ability of quantum machines to outperform classical machines, come from? How to ensure the proper functioning of a quantum machine? What advantages can be gained in practice from the use of quantum information? These three questions are at the heart of the development of future quantum technologies and we provide several answers within the frameworks of continuous variable quantum information and linear quantum optics. Quantum advantage in continuous variable comes in particular from the use of so-called non-Gaussian quantum states. We introduce the stellar formalism to characterize these states. We then study the transition from classically simulable models to models universal for quantum computing. We show that quantum computational supremacy, the dramatic speedup of quantum computers over their classical counterparts, may be realised with non-Gaussian states and Gaussian measurements. Quantum certification denotes the methods seeking to verify the correct functioning of a quantum machine. 
We consider certification of quantum states in continuous variable, introducing several protocols according to the assumptions made on the tested state. We develop efficient methods for the verification of a large class of multimode quantum states, including the output states of the Boson Sampling model, enabling the experimental verification of quantum supremacy with photonic quantum computing. We give several new examples of practical applications of quantum information in linear quantum optics. Generalising the swap test, we highlight a connection between the ability to distinguish two quantum states and the ability to perform universal programmable quantum measurements, for which we give various implementations in linear optics, based on the use of single photons or coherent states. Finally, we obtain, thanks to linear optics, the first implementation of a quantum protocol for weak coin flipping, a building block for many cryptographic applications.
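The swap test mentioned above estimates the overlap of two states: the ancilla qubit is measured as 0 with probability $(1 + |\langle\psi|\phi\rangle|^2)/2$. A minimal numerical sketch of this statistic (illustrative only; it does not reproduce the thesis's generalized construction):

```python
import numpy as np

def swap_test_prob0(psi, phi):
    """Probability that the swap-test ancilla reads 0:
    P(0) = (1 + |<psi|phi>|^2) / 2, so P(0) = 1 iff the states coincide
    (up to a global phase) and P(0) = 1/2 iff they are orthogonal."""
    overlap = np.vdot(psi, phi)  # np.vdot conjugates the first argument
    return 0.5 * (1.0 + np.abs(overlap) ** 2)

ket0 = np.array([1.0, 0.0], dtype=complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

print(swap_test_prob0(ket0, ket0))   # identical states: P(0) = 1
print(swap_test_prob0(ket0, plus))   # |<0|+>|^2 = 1/2, so P(0) = 3/4
```

Repeating the test and counting 0-outcomes therefore estimates the distinguishability of the two states, which is the connection to programmable quantum measurements highlighted in the thesis.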

The Born supremacy: quantum advantage and training of an Ising Born machine

The search for an application of near-term quantum devices is widespread. Quantum machine learning is touted as a potential utilisation of such devices, particularly those out of reach of the simulation capabilities of classical computers. In this work, we study such an application in generative modelling, focussing on a class of quantum circuits known as Born machines. Specifically, we define a subset of this class based on Ising Hamiltonians and show that the circuits encountered during gradient-based training cannot be efficiently sampled from classically up to multiplicative error in the worst case. Our gradient-based training methods use cost functions known as the Sinkhorn divergence and the Stein discrepancy, which have not previously been used in the gradient-based training of quantum circuits, and we also introduce quantum kernels to generative modelling. We show that these methods outperform the previous standard method, which used maximum mean discrepancy (MMD) as a cost function, and achieve this with minimal overhead. Finally, we discuss the ability of the model to learn hard distributions and provide formal definitions for ‘quantum learning supremacy’. We also exemplify the work of this paper by using generative modelling to perform quantum circuit compilation.

Design and implementation of high-performance devices for continuous-variable quantum key distribution

Quantum key distribution (QKD) is one of the first quantum technologies that were able to provide commercially meaningful solutions to the problem of distributing cryptographic keys between trusted parties, guaranteeing long term security. It is now progressing towards technical maturity, by proposing multiple implementation alternatives. In this thesis, we study Continuous-Variable QKD (CV-QKD), which shares many common elements with classical coherent communication systems, and is a good candidate to facilitate the access to QKD for more users. The use of digital signal processing (DSP) techniques typical in classical communications has been only partially exploited in previous CV-QKD implementations. We experimentally implement standard telecommunication techniques like pulse shaping, adaptive filtering and mode recovery in order to improve the quantum secret key rate and optimize the occupied bandwidth. The potential of integration of the components in a photonic integrated circuit (PIC) is another important aspect of CV-QKD. We have tested a silicon photonics PIC integrating a 180° hybrid detector with two germanium photodiodes, showing that measured parameters are compatible with the generation of a secret key. One of the most limiting factors of QKD is the performance under lossy channels, which is common in optical fibre for distances on the order of a hundred kilometers. The range can be significantly extended using free space communications, and in particular satellites, where the losses at longer distances can be lower than those in fibre. We consider a model for a downlink satellite channel and predict the achievable secret key rates at different altitudes for CV-QKD, resulting in a potentially feasible technology for satellite communications, extending the range to intercontinental distances.

Quantum Physical Unclonable Functions: Possibilities and Impossibilities

Physical Unclonable Functions (PUFs) are physical devices with unique behavior that are hard to clone. A variety of PUF schemes have been considered in theoretical studies as well as practical implementations of several security primitives such as identification and key generation. Recently, the inherent unclonability of quantum states has been exploited for defining (a partial) quantum analogue to classical PUFs (against limited adversaries). There are also a few proposals for quantum implementations of classical optical PUFs. However, none of these attempts provides a comprehensive study of Quantum Physical Unclonable Functions (QPUFs) with quantum cryptographic tools as we present in this paper. We formally define QPUFs, encapsulating all requirements of classical PUFs as well as introducing new ones inherent to the quantum setting such as testability. We develop a quantum game-based security framework for our analysis and define a new class of quantum attacks, called General Quantum Emulation Attack. This class of attacks exploits previously captured valid challenge-response pairs to emulate the action of an unknown quantum transformation on new inputs. We devise a concrete attack based on an existing quantum emulation algorithm and use it to show that a family of quantum cryptographic primitives that rely on unknown unitary transformations do not provide existential unforgeability while they provide selective unforgeability. Then, we express our results in the case of QPUF as an unknown unitary transformation.

QFactory: classically-instructed remote secret qubits preparation

The functionality of classically-instructed remotely prepared random secret qubits was introduced in (Cojocaru et al 2018) as a way to enable classical parties to participate in secure quantum computation and communications protocols. The idea is that a classical party (client) instructs a quantum party (server) to generate a qubit on the server’s side that is random, unknown to the server but known to the client. Such a task is only possible under computational assumptions. In this contribution we define a simpler (basic) primitive consisting of only BB84 states, and give a protocol that realizes this primitive and that is secure against the strongest possible adversary (an arbitrarily deviating malicious server). The specific functions used were constructed based on known trapdoor one-way functions, resulting in the security of our basic primitive being reduced to the hardness of the Learning With Errors problem. We then give a number of extensions, building on this basic module: extension to a larger set of states (that includes non-Clifford states); proper consideration of the abort case; and verifiability on the module level. The latter is based on “blind self-testing”, a notion we introduced and proved in a limited setting, conjecturing its validity in the most general case.

Randomness for quantum information processing

This thesis is focused on the generation and understanding of particular kinds of quantum randomness. Randomness is useful for many tasks in physics and information processing, from randomized benchmarking, to black hole physics, as well as demonstrating a so-called quantum speedup, and many other applications. On the one hand we explore how to generate a particular form of random evolution known as a t-design. On the other we show how this can also give instances for quantum speedup - where classical computers cannot simulate the randomness efficiently. We also show that this is still possible in noisy realistic settings. More specifically, this thesis is centered around three main topics. The first of these is the generation of epsilon-approximate unitary t-designs. In this direction, we first show that non-adaptive, fixed measurements on a graph state composed of poly(n,t,log(1/epsilon)) qubits, and with a regular structure (that of a brickwork state) effectively give rise to a random unitary ensemble which is an epsilon-approximate t-design. This work is presented in Chapter 3. Before this work, it was known that non-adaptive fixed XY measurements on a graph state give rise to unitary t-designs; however, the graph states used there were of complicated structure and were therefore not natural candidates for measurement based quantum computing (MBQC), and the circuits to make them were complicated. The novelty in our work is showing that t-designs can be generated by fixed, non-adaptive measurements on graph states whose underlying graphs are regular 2D lattices. These graph states are universal resources for MBQC. Therefore, our result allows the natural integration of unitary t-designs, which provide a notion of quantum pseudorandomness which is very useful in quantum algorithms, into quantum algorithms running in MBQC.
Moreover, in the circuit picture this construction for t-designs may be viewed as a constant depth quantum circuit, albeit with a polynomial number of ancillas. We then provide new constructions of epsilon-approximate unitary t-designs both in the circuit model and in MBQC which are based on a relaxation of technical requirements in previous constructions. These constructions are found in Chapters 4 and 5.

Methods for Classically Simulating Noisy Networked Quantum Architectures

As research on building scalable quantum computers advances, it is important to be able to certify their correctness. Due to the exponential hardness of classically simulating quantum computation, straightforward verification via this means fails. However, we can classically simulate small scale quantum computations and hence we are able to test that devices behave as expected in this domain. This constitutes the first step towards obtaining confidence in the anticipated quantum-advantage when we extend to scales that can no longer be simulated. Real devices have restrictions due to their architecture and limitations due to physical imperfections and noise. In this paper we extend the usual ideal simulations by considering those effects. We provide a general methodology and framework for constructing simulations which emulate the physical system. These simulations should provide a benchmark for realistic devices and guide experimental research in the quest for quantum-advantage. To illustrate our methodology we give examples that involve networked architectures and the noise-model of the device developed by the Networked Quantum Information Technologies Hub (NQIT). For our simulations we use, with suitable modification, the classical simulator of Bravyi and Gosset, while the specific problems considered belong to the Instantaneous Quantum Polynomial-time (IQP) class. This class is believed to be hard for classical computational devices, and is regarded as a promising candidate for the first demonstration of quantum-advantage. We first consider a subclass of IQP, defined by Bermejo-Vega et al., involving two-dimensional dynamical quantum simulators, and then general instances of IQP, restricted to the architecture of NQIT.

Complexity-theoretic limitations on blind delegated quantum computation

Blind delegation protocols allow a client to delegate a computation to a server so that the server learns nothing about the input to the computation apart from its size. For the specific case of quantum computation we know that blind delegation protocols can achieve information-theoretic security. In this paper we prove, provided certain complexity-theoretic conjectures are true, that the power of information-theoretically secure blind delegation protocols for quantum computation (ITS-BQC protocols) is in a number of ways constrained. In the first part of our paper we provide some indication that ITS-BQC protocols for delegating $\sf BQP$ computations in which the client and the server interact only classically are unlikely to exist. We first show that having such a protocol with $O(n^d)$ bits of classical communication implies that $\mathsf{BQP} \subset \mathsf{MA/O(n^d)}$. We conjecture that this containment is unlikely by providing an oracle relative to which $\mathsf{BQP} \not\subset \mathsf{MA/O(n^d)}$. We then show that if an ITS-BQC protocol exists with polynomial classical communication and which allows the client to delegate quantum sampling problems, then there exist non-uniform circuits of size $2^{n - \Omega(n/\log n)}$, making polynomially-sized queries to an $\sf NP^{NP}$ oracle, for computing the permanent of an $n \times n$ matrix. The second part of our paper concerns ITS-BQC protocols in which the client and the server engage in one round of quantum communication and then exchange polynomially many classical messages. First, we provide a complexity-theoretic upper bound on the types of functions that could be delegated in such a protocol, namely $\mathsf{QCMA/qpoly \cap coQCMA/qpoly}$. Then, we show that having such a protocol for delegating $\mathsf{NP}$-hard functions implies $\mathsf{coNP^{NP^{NP}}} \subseteq \mathsf{NP^{NP^{PromiseQMA}}}$.

The Born Supremacy: Quantum Advantage and Training of an Ising Born Machine

The search for an application of near-term quantum devices is widespread. Quantum Machine Learning is touted as a potential utilisation of such devices, particularly those which are out of the reach of the simulation capabilities of classical computers. In this work, we propose a generative Quantum Machine Learning Model, called the Ising Born Machine (IBM), which we show cannot, in the worst case, and up to suitable notions of error, be simulated efficiently by a classical device. We also show this holds for all the circuit families encountered during training. In particular, we explore quantum circuit learning using non-universal circuits derived from Ising Model Hamiltonians, which are implementable on near term quantum devices. We propose two novel training methods for the IBM by utilising the Stein Discrepancy and the Sinkhorn Divergence cost functions. We show numerically, both using a simulator within Rigetti’s Forest platform and on the Aspen-1 16Q chip, that the cost functions we suggest outperform the more commonly used Maximum Mean Discrepancy (MMD) for differentiable training. We also propose an improvement to the MMD by proposing a novel utilisation of quantum kernels which we demonstrate provides improvements over its classical counterpart. We discuss the potential of these methods to learn ‘hard’ quantum distributions, a feat which would demonstrate the advantage of quantum over classical computers, and provide the first formal definitions for what we call ‘Quantum Learning Supremacy’. Finally, we propose a novel view on the area of quantum circuit compilation by using the IBM to ‘mimic’ target quantum circuits using classical output data only.
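The MMD baseline compares two sample sets through a kernel. Below is a hedged classical sketch of the standard unbiased MMD² estimator with a Gaussian kernel (function names and parameters are our own; the quantum-kernel variant proposed in the paper is not reproduced here):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, kernel=gaussian_kernel):
    """Unbiased estimate of MMD^2 between samples X ~ p and Y ~ q:
    E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)], with diagonal terms excluded."""
    m, n = len(X), len(Y)
    kxx = sum(kernel(X[i], X[j]) for i in range(m) for j in range(m) if i != j)
    kyy = sum(kernel(Y[i], Y[j]) for i in range(n) for j in range(n) if i != j)
    kxy = sum(kernel(x, y) for x in X for y in Y)
    return kxx / (m * (m - 1)) + kyy / (n * (n - 1)) - 2 * kxy / (m * n)

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(0, 1, (100, 2)), rng.normal(0, 1, (100, 2)))
diff = mmd2_unbiased(rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2)))
print(same < diff)  # matching distributions give a smaller MMD
```

Training a Born machine with such a cost means drawing one sample set from the circuit and one from the data, then descending the gradient of the estimator; the Stein and Sinkhorn costs above play the same role with different discrepancy measures.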

Efficient approximate unitary t-designs from partially invertible universal sets and their application to quantum speedup

At its core a $t$-design is a method for sampling from a set of unitaries in a way which mimics sampling randomly from the Haar measure on the unitary group, with applications across quantum information processing and physics. We construct new families of quantum circuits on $n$-qubits giving rise to $\varepsilon$-approximate unitary $t$-designs efficiently in $O(n^3t^2)$ depth. These quantum circuits are based on a relaxation of technical requirements in previous constructions. In particular, the construction of circuits which give efficient approximate $t$-designs by Brandao, Harrow, and Horodecki (F.G.S.L. Brandao, A.W. Harrow, and M. Horodecki, Commun. Math. Phys. (2016)) required choosing gates from ensembles which contained inverses for all elements, and that the entries of the unitaries are algebraic. We reduce these requirements to sets that contain elements without inverses in the set, and non-algebraic entries, which we dub partially invertible universal sets. We then adapt this circuit construction to the framework of measurement based quantum computation (MBQC) and give new explicit examples of $n$-qubit graph states with fixed assignments of measurements (graph gadgets) giving rise to unitary $t$-designs based on partially invertible universal sets, in a natural way. We further show that these graph gadgets demonstrate a quantum speedup, up to standard complexity theoretic conjectures. We provide numerical and analytical evidence that almost any assignment of fixed measurement angles on an $n$-qubit cluster state gives an efficient $t$-design and demonstrates a quantum speedup.
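For reference, one common way to state the defining property of an $\varepsilon$-approximate unitary $t$-design is through its $t$-th moment channel (the precise norm and error variant differ between constructions, including those above):

```latex
% Ensemble \nu = \{(p_i, U_i)\} on U(2^n); its t-th moment channel is
M_t^{(\nu)}(\rho) = \sum_i p_i \, U_i^{\otimes t} \, \rho \, (U_i^\dagger)^{\otimes t},
\qquad
M_t^{(\mu_H)}(\rho) = \int_{U(2^n)} U^{\otimes t} \, \rho \, (U^\dagger)^{\otimes t} \, \mathrm{d}\mu_H(U),
% and \nu is an \varepsilon-approximate unitary t-design when
\bigl\| M_t^{(\nu)} - M_t^{(\mu_H)} \bigr\|_\diamond \le \varepsilon .
```

Intuitively, sampling from the ensemble is then indistinguishable from Haar-random sampling by any experiment that uses at most $t$ copies of the unitary.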

A simple protocol for fault tolerant verification of quantum computation

With experimental quantum computing technologies now in their infancy, the search for efficient means of testing the correctness of these quantum computations is becoming more pressing. An approach to the verification of quantum computation within the framework of interactive proofs has been fruitful for addressing this problem. Specifically, an untrusted agent (prover) alleging to perform quantum computations can have his claims verified by another agent (verifier) who only has access to classical computation and a small quantum device for preparing or measuring single qubits. However, when this quantum device is prone to errors, verification becomes challenging and often existing protocols address this by adding extra assumptions, such as requiring the noise in the device to be uncorrelated with the noise on the prover’s devices. In this paper, we present a simple protocol for verifying quantum computations, in the presence of noisy devices, with no extra assumptions. This protocol is based on post hoc techniques for verification, which allow for the prover to know the desired quantum computation and its input. We also perform a simulation of the protocol, for a one-qubit computation, and find the error thresholds when using the qubit repetition code as well as the Steane code.
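As a toy illustration of why encoding in a repetition code helps below threshold: the 3-bit repetition code corrects any single bit flip by majority vote, so a logical error requires at least two physical flips. The sketch below is a purely classical Monte Carlo check of that arithmetic (our own illustration; the paper's simulation is of the full quantum protocol):

```python
import numpy as np

def logical_error_rate(p, n_trials=200_000, seed=0):
    """Empirical logical error rate of 3-bit majority voting when each
    bit is independently flipped with probability p."""
    rng = np.random.default_rng(seed)
    flips = rng.random((n_trials, 3)) < p
    return float(np.mean(flips.sum(axis=1) >= 2))  # vote fails on >= 2 flips

p = 0.1
empirical = logical_error_rate(p)
analytic = 3 * p**2 * (1 - p) + p**3  # exactly-2 flips plus all-3 flips
print(f"empirical {empirical:.4f} vs analytic {analytic:.4f}")
print(analytic < p)  # below threshold, encoding suppresses errors
```

The quadratic suppression $p \to O(p^2)$ is what allows repeated encoded test rounds to drive the verifier's confidence up, and the Steane code plays the analogous role for general single-qubit errors.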

Space QUEST mission proposal: Experimentally testing decoherence due to gravity

Models of quantum systems on curved space-times lack sufficient experimental verification. Some speculative theories suggest that quantum properties, such as entanglement, may exhibit entirely different behavior from that of purely classical systems. By measuring this effect, or the lack thereof, we can test the hypotheses behind several such models. For instance, as predicted by Ralph and coworkers [T. C. Ralph, G. J. Milburn, and T. Downes, Phys. Rev. A, 79(2):22121, 2009; T. C. Ralph and J. Pienaar, New Journal of Physics, 16(8):85008, 2014], a bipartite entangled system could decohere if each particle traversed a different gravitational field gradient. We propose to study this effect in a ground-to-space uplink scenario. We extend the above theoretical predictions of Ralph and coworkers and discuss the scientific consequences of detecting, or failing to detect, the predicted gravitational decoherence. We present a detailed mission design of the European Space Agency's (ESA) Space QUEST (Space - Quantum Entanglement Space Test) mission and study the feasibility of the mission scheme.

Information Theoretically Secure Hypothesis Test for Temporally Unstructured Quantum Computation (Extended Abstract)
Multiparty Delegated Quantum Computing
Unconditionally verifiable blind quantum computation

Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system, with consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational-basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while the resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.

Rigidity of quantum steering and one-sided device-independent verifiable quantum computation

The relationship between correlations and entanglement has played a major role in understanding quantum theory since the work of Einstein et al (1935 Phys. Rev. 47 777–80). Tsirelson proved that Bell states, shared between two parties, when measured suitably, achieve the maximum non-local correlations allowed by quantum mechanics (Cirel’son 1980 Lett. Math. Phys. 4 93–100). Conversely, Reichardt et al showed that observing the maximal correlation value over a sequence of repeated measurements implies that the underlying quantum state is close to a tensor product of maximally entangled states and, moreover, that it is measured according to an ideal strategy (Reichardt et al 2013 Nature 496 456–60). However, this strong rigidity result comes at a high price, requiring a large number of entangled pairs to be tested. In this paper, we present a significant improvement in terms of the overhead by instead considering quantum steering, where the device of one side is trusted. We first demonstrate a robust one-sided device-independent version of self-testing, which characterises the shared state and measurement operators of two parties up to a certain bound. We show that this bound is optimal up to constant factors and we generalise the results to the most general attacks. This leads us to a rigidity theorem for maximal steering correlations. As a key application we give a one-sided device-independent protocol for verifiable delegated quantum computation, and compare it to other existing protocols to highlight the cost of trust assumptions. Finally, we show that under reasonable assumptions, the states shared in order to run a certain type of verification protocol must be unitarily equivalent to perfect Bell states.
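The Tsirelson bound mentioned above can be checked directly: for the Bell state $|\Phi^+\rangle$ with both parties measuring in the Z-X plane, the CHSH value reaches $2\sqrt{2}$ at the standard angles. A small numerical sketch (textbook material, not the steering protocol of the paper; the function names are ours):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def obs(a):
    # Dichotomic observable measured at angle a in the Z-X plane.
    return np.cos(a) * Z + np.sin(a) * X

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # Bell state |Phi+>

def correlator(a, b):
    # E(a, b) = <Phi+| A(a) x B(b) |Phi+> = cos(a - b)
    return float(np.real(phi.conj() @ np.kron(obs(a), obs(b)) @ phi))

def chsh(a0, a1, b0, b1):
    # CHSH value S = E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1);
    # classically |S| <= 2, quantumly |S| <= 2*sqrt(2) (Tsirelson).
    return (correlator(a0, b0) + correlator(a0, b1)
            + correlator(a1, b0) - correlator(a1, b1))
```

Evaluating at the angles $a_0 = 0$, $a_1 = \pi/2$, $b_0 = \pi/4$, $b_1 = -\pi/4$ saturates the quantum maximum, which is the "maximal correlation value" whose repeated observation underpins the rigidity results discussed above.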

Verification of Quantum Computation: An Overview of Existing Approaches
Super- and subradiance of clock atoms in multimode optical waveguides

The transversely confined propagating modes of an optical fiber mediate virtually infinite-range energy exchanges among atoms placed within their field, which adds to the inherent free-space dipole-dipole coupling. Typically, the single-atom free-space decay rate largely surpasses the emission rate into the guided fiber modes. However, scaling up the atom number as well as the system size amounts to entering a collective emission regime, where fiber-induced superradiant spontaneous emission dominates over free-space decay. We numerically study this super- and subradiant decay of highly excited atomic states for one or several transverse fiber modes as present in hollow-core fibers. As particular excitation scenarios we compare the decay of a totally inverted state to the case of π/2 pulses applied transversely or along the fiber axis as in standard Ramsey or Rabi interferometry. While a mean-field approach fails to correctly describe the initiation of superradiance, a second-order approximation accounting for pairwise atom-atom quantum correlations generally proves sufficient to reliably describe superradiance of ensembles from two to a few hundred particles. In contrast, a full account of subradiance requires the inclusion of all higher-order quantum correlations. Considering multiple guided modes introduces a natural effective cutoff for the interaction range, emerging from the dephasing of different fiber contributions.

QMA-Hardness of Consistency of Local Density Matrices with Applications to Quantum Zero-Knowledge
Public-Key Encryption with Quantum Keys

In the framework of Impagliazzo’s five worlds, a distinction is often made between two worlds, one where public-key encryption exists (Cryptomania), and one in which only one-way functions exist (MiniCrypt). However, the boundaries between these worlds can change when quantum information is taken into account. Recent work has shown that quantum variants of oblivious transfer and multi-party computation, both primitives that are classically in Cryptomania, can be constructed from one-way functions, placing them in the realm of quantum MiniCrypt (the so-called MiniQCrypt). This naturally raises the following question: Is it possible to construct a quantum variant of public-key encryption, which is at the heart of Cryptomania, from one-way functions or potentially weaker assumptions? In this work, we initiate the formal study of the notion of quantum public-key encryption (qPKE), i.e., public-key encryption where keys are allowed to be quantum states. We propose new definitions of security and several constructions of qPKE based on the existence of one-way functions (OWF), or even weaker assumptions, such as pseudorandom function-like states (PRFS) and pseudorandom function-like states with proof of destruction (PRFSPD). Finally, to give a tight characterization of this primitive, we show that computational assumptions are necessary to build quantum public-key encryption. That is, we give a self-contained proof that no quantum public-key encryption scheme can provide information-theoretic security.

Experimental cheat-sensitive quantum weak coin flipping

As in modern communication networks, the security of quantum networks will rely on complex cryptographic tasks that are based on a handful of fundamental primitives. Weak coin flipping (WCF) is one such significant primitive, which allows two mistrustful parties to agree on a random bit while they favor opposite outcomes. Remarkably, perfect information-theoretic security can be achieved in principle for quantum WCF. Here, we overcome conceptual and practical issues that have prevented the experimental demonstration of this primitive to date, and demonstrate how quantum resources can provide cheat sensitivity, whereby each party can detect a cheating opponent, and an honest party is never sanctioned. Such a property is not known to be classically achievable with information-theoretic security. Our experiment implements a refined, loss-tolerant version of a recently proposed theoretical protocol and exploits heralded single photons generated by spontaneous parametric down-conversion, a carefully optimized linear optical interferometer including beam splitters with variable reflectivities, and a fast optical switch for the verification step. High values of our protocol benchmarks are maintained for attenuation corresponding to several kilometers of telecom optical fiber.

Dispelling myths on superposition attacks: formal security model and attack analyses
Device-independent and semi-device-independent entanglement certification in broadcast Bell scenarios

It has recently been shown that by broadcasting the subsystems of a bipartite quantum state, one can activate Bell nonlocality and significantly improve noise tolerance bounds for device-independent entanglement certification. In this work we strengthen these results and explore new aspects of this phenomenon. First, we prove new results related to the activation of Bell nonlocality. We construct Bell inequalities tailored to the broadcast scenario, and show how broadcasting can lead to even stronger notions of Bell nonlocality activation. In particular, we exploit these ideas to show that bipartite states admitting a local hidden-variable model for general measurements can lead to genuine tripartite nonlocal correlations. We then study device-independent entanglement certification in the broadcast scenario, and show through semidefinite programming techniques that device-independent entanglement certification is possible for the two-qubit Werner state in essentially the entire range of entanglement. Finally, we extend the concept of EPR steering to the broadcast scenario, and present novel examples of activation of the two-qubit isotropic state. Our results pave the way for broadcast-based device-independent and semi-device-independent protocols.
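For reference, the entanglement range of the two-qubit Werner state mentioned above can be verified with the standard PPT criterion: the state $p\,|\Phi^+\rangle\langle\Phi^+| + (1-p)\,\mathbb{1}/4$ is entangled iff $p > 1/3$, since the minimal eigenvalue of its partial transpose is $(1-3p)/4$. A sketch of that check (the PPT computation is standard textbook material, not the broadcast semidefinite-programming technique of the paper; the function name is ours):

```python
import numpy as np

def werner_ppt_min_eig(p):
    # Two-qubit Werner state: p |Phi+><Phi+| + (1 - p) I/4.
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4
    # Partial transpose on the second qubit: swap its ket and bra indices.
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    # A negative eigenvalue certifies entanglement (Peres-Horodecki);
    # for the Werner state the minimum is (1 - 3p) / 4.
    return float(np.min(np.linalg.eigvalsh(rho_pt)))
```

The point of the broadcast results above is that device-independent certification, which is normally far weaker than this trusted-device PPT test, can be pushed to essentially the same $p > 1/3$ range.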

Cyber security in the quantum era
Certified Randomness From Steering Using Sequential Measurements

The generation of certifiable randomness is one of the most promising applications of quantum technologies. Furthermore, the intrinsic non-locality of quantum correlations allows us to certify randomness in a device-independent way, i.e. one need not make assumptions about the devices used. Thanks to the work of Curchod et al., a single entangled two-qubit pure state can be used to produce arbitrary amounts of certified randomness. However, obtaining this randomness is experimentally challenging as it requires a large number of measurements, both projective and general. Motivated by these difficulties in the device-independent setting, we instead consider the scenario of one-sided device independence, where certain devices are trusted and others are not; a scenario motivated by asymmetric experimental set-ups such as ion-photon networks. We show how certain aspects of previous work can be adapted to this scenario and provide theoretical bounds on the amount of randomness which can be certified. Furthermore, we give a protocol for unbounded randomness certification in this scenario, and provide numerical results demonstrating the protocol in the ideal case. Finally, we numerically test the possibility of implementing this scheme on near-term quantum technologies, by considering the performance of the protocol on several physical platforms.

Benchmarking of quantum protocols

Quantum network protocols offer new functionalities, such as enhanced security, to communication and computational systems. Despite the rapid progress in quantum hardware, it has not yet reached a level of maturity that enables execution of many quantum protocols in practical settings. To develop quantum protocols in the real world, it is necessary to examine their performance, considering the imperfections in their practical implementation, using simulation platforms. In this paper, we consider several quantum protocols that enable promising functionalities and services in near-future quantum networks. The protocols are chosen from both areas of quantum communication and quantum computation as follows: quantum money, W-state based anonymous transmission, verifiable blind quantum computation, and quantum digital signature. We use the NetSquid simulation platform to evaluate the effect of various sources of noise on the performance of these protocols, considering different figures of merit. We find that to enable the quantum money protocol, the decoherence time constant of the quantum memory must be at least three times the storage time of qubits. Furthermore, our simulation results for the W-state based anonymous transmission protocol show that to achieve an average fidelity above 0.8 in this protocol, the storage time of the sender’s and receiver’s particles in the quantum memory must be less than half of the decoherence time constant of the quantum memory. We have also investigated the effect of gate imperfections on the performance of verifiable blind quantum computation. We find that with our chosen parameters, if the depolarizing probability of quantum gates is equal to or greater than 0.05, the security of the protocol cannot be guaranteed. Lastly, our simulation results for the quantum digital signature protocol show that channel loss has a significant effect on the probability of repudiation.
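As a rough sanity check of the storage-time threshold quoted above, consider a toy single-qubit pure-dephasing model (ours, not the NetSquid simulation used in the paper): if the stored coherence decays as $e^{-t/T}$, the fidelity of a stored $|+\rangle$ state is $F(t) = (1 + e^{-t/T})/2$, which drops through roughly 0.80 just as the storage time reaches half the decoherence time constant.

```python
import numpy as np

def stored_plus_fidelity(t, T):
    # Toy model: a |+> state held for time t in a memory with pure
    # dephasing of time constant T. The off-diagonal coherence decays
    # as exp(-t/T), giving fidelity F(t) = (1 + exp(-t/T)) / 2
    # against the ideal |+> state.
    return 0.5 * (1.0 + np.exp(-t / T))
```

In this model $F(T/2) \approx 0.803$, so the simulated 0.8-fidelity threshold at half the decoherence time is at least qualitatively consistent with simple exponential coherence decay.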

A step closer to secure global communication