Trainability and Expressivity of Hamming-Weight Preserving Quantum Circuits for Machine Learning

Abstract

Quantum machine learning has become a promising area for real-world applications of quantum computers, but near-term methods and their scalability remain important research topics. In this context, we analyze the trainability and controllability of specific Hamming-weight preserving quantum circuits. These circuits use gates that preserve subspaces of the Hilbert space spanned by basis states with fixed Hamming weight $k$. They are good candidates for mimicking neural networks, both by loading classical data and by performing trainable layers. In this work, we first design and prove the feasibility of new heuristic data loaders that perform quantum amplitude encoding of $\binom{n}{k}$-dimensional vectors by training an $n$-qubit quantum circuit. We then analyze more generally the trainability of Hamming-weight preserving circuits and show that the variance of their gradients is bounded according to the size of the preserved subspace. This establishes the conditions for the existence of barren plateaus in these circuits and highlights a setting where a recent conjecture on the link between the controllability and trainability of variational quantum circuits does not apply.
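
To make the subspace structure concrete, here is a minimal sketch (not the paper's implementation) of a Hamming-weight preserving two-qubit gate. The abstract does not name a specific gate set, so the choice of an RBS-style Givens rotation, which only mixes $|01\rangle$ and $|10\rangle$, is an assumption for illustration; the check below verifies numerically that such a gate maps the weight-$k$ subspace of dimension $\binom{n}{k}$ to itself.

```python
# Illustrative sketch only: an RBS-style two-qubit rotation (assumed gate choice,
# not taken from the paper) that mixes |01> and |10> while leaving |00> and |11>
# untouched, so it never changes the number of 1s in a computational basis state.
import numpy as np
from math import comb

def rbs(theta):
    """2-qubit rotation acting on span{|01>, |10>}; |00> and |11> are fixed."""
    c, s = np.cos(theta), np.sin(theta)
    # Basis ordering: |00>, |01>, |10>, |11>
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def embed_two_qubit_gate(gate, n, q0):
    """Embed a 2-qubit gate acting on adjacent qubits (q0, q0+1) of an n-qubit register."""
    return np.kron(np.kron(np.eye(2**q0), gate), np.eye(2**(n - q0 - 2)))

def hamming_weight(index):
    return bin(index).count("1")

n, k = 4, 2
U = embed_two_qubit_gate(rbs(0.7), n, q0=1)

# Applying U to any basis state of Hamming weight k keeps all amplitude
# inside the weight-k subspace, whose dimension is C(n, k).
subspace = [i for i in range(2**n) if hamming_weight(i) == k]
print("dim of weight-k subspace:", len(subspace), "= C(n,k) =", comb(n, k))
for i in subspace:
    state = np.zeros(2**n)
    state[i] = 1.0
    out = U @ state
    support = [j for j in range(2**n) if abs(out[j]) > 1e-12]
    assert all(hamming_weight(j) == k for j in support)
print("The gate preserves Hamming weight on all C(n,k) basis states.")
```

Layering many such parametrized gates, as the abstract describes, yields a circuit whose action is confined to a $\binom{n}{k}$-dimensional block rather than the full $2^n$-dimensional Hilbert space, which is the quantity the gradient-variance bounds are stated in terms of.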
