Machine learning has enabled the resolution of many real-world problems that traditional computational methods struggled to solve, or could only solve at greater cost. Quantum computing is a paradigm of computation that uses the quantum states of matter to achieve large computational speed-ups on some problems. Recent progress in the development of quantum hardware has encouraged research into concrete applications of quantum computers, and it became natural to look for ways in which quantum computers can be applied to machine learning. This thesis is a contribution towards this goal. The first part aims at understanding the general capabilities of variational quantum circuits (VQCs) for machine learning tasks on vector data inputs, and at clarifying the conditions necessary to expect a quantum advantage. VQCs are a family of quantum algorithms in which one finds gate parameters that minimize a cost function, in the same way as neural networks. They are effectively linear models in a high-dimensional feature space. I show that although VQCs are costly to evaluate, one can sometimes construct cheap classical approximators, called classical surrogates, using the technique of random features regression. When this approximation is possible, the quantum advantage is limited. I also highlight that training a classical model on the same feature map leads to a solution called the minimum-norm least-squares (MNLS) estimator, whereas the training dynamics of the quantum circuit will not necessarily lead to the same solution. This separation is the source of quantum advantage: I show that it is sufficient for the weight vector of the quantum model to have a large norm, and I give concrete examples. The second part explores the use of quantum computers to perform machine learning tasks on graph-structured data. Machine learning on graph data encompasses many real-world applications, and algorithms designed for vector data cannot be applied directly.
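The two classical objects mentioned above can be sketched together: a random-features regression model whose weights are the MNLS estimator. This is a minimal numpy illustration under illustrative assumptions (toy sizes and a toy target, not the feature maps or datasets studied in the thesis); the pseudoinverse selects, among all least-squares solutions, the one of minimum Euclidean norm.

```python
import numpy as np

# Hedged sketch: random-features regression as a classical surrogate,
# fitted with the minimum-norm least-squares (MNLS) estimator.
# All sizes and the target function below are illustrative assumptions.
rng = np.random.default_rng(0)
n, d, n_feat = 50, 3, 200               # overparameterized: n_feat > n
X = rng.normal(size=(n, d))             # toy input data
y = np.sin(X @ np.ones(d))              # toy target

# Random Fourier features: cosines with random frequencies and phases,
# approximating a shift-invariant kernel.
W = rng.normal(size=(d, n_feat))
b = rng.uniform(0.0, 2.0 * np.pi, size=n_feat)
Phi = np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)

# The pseudoinverse returns, among all least-squares solutions,
# the weight vector of minimum Euclidean norm (the MNLS estimator).
w_mnls = np.linalg.pinv(Phi) @ y
```

In the overparameterized regime the MNLS solution interpolates the training data exactly; a quantum model trained by gradient descent on the same feature map need not converge to this particular weight vector, which is the separation discussed above.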
In this part, I aimed to develop quantum algorithms adapted to the graph structure of the data. The main idea is to encode the graph into a Hamiltonian with the same topology. One then prepares a quantum state by evolving under this Hamiltonian, and the resulting measurements are incorporated into a classical machine learning algorithm. This approach is especially suited to neutral-atom quantum computers: on such platforms, one can easily create a quantum system with the desired connectivity, and the geometry can be changed at each run. I developed a large family of algorithms, inspired by kernel methods, graph neural networks, and transformers, with the intention that they be run on current hardware. I performed numerical experiments on large-scale datasets and describe the results of an experimental implementation on the hardware of Pasqal.
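The graph-to-Hamiltonian encoding can be illustrated with a generic Ising-type construction: one two-body term per edge, so the interaction topology matches the graph. This is a classical simulation sketch under assumptions of my own (a ZZ interaction and a 3-node triangle graph), not the specific Hamiltonians or the Pasqal stack used in the thesis.

```python
import numpy as np

# Hedged sketch: encode a graph into an Ising-type Hamiltonian whose
# interaction topology matches the graph's edges. The ZZ coupling and
# the triangle graph below are illustrative assumptions.
Z = np.diag([1.0, -1.0])   # Pauli-Z
I2 = np.eye(2)

def zz_term(i, j, n):
    """Tensor product placing Z on qubits i and j, identity elsewhere."""
    ops = [Z if k in (i, j) else I2 for k in range(n)]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def graph_hamiltonian(edges, n):
    """H = sum over edges (i, j) of Z_i Z_j: one term per graph edge."""
    return sum(zz_term(i, j, n) for i, j in edges)

# Triangle graph on 3 qubits: every pair of nodes interacts.
H = graph_hamiltonian([(0, 1), (1, 2), (0, 2)], 3)
```

Evolving a quantum state under such a Hamiltonian and measuring observables yields graph-dependent features that a classical learner can consume, which is the pipeline described above.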