Cost-Function-Dependent Barren Plateaus in Shallow Quantum Neural Networks
Variational quantum algorithms (VQAs) optimize the parameters θ of a quantum neural network V(θ) to minimize a cost function C. While VQAs may enable practical applications of noisy quantum computers, they are nevertheless heuristic methods whose scaling is unproven. Here, we rigorously prove two results. Our first result states that defining C in terms of global observables leads to an exponentially vanishing gradient (i.e., a barren plateau) even when V(θ) is shallow. This implies that the cost functions proposed for several VQAs in the literature must be revised. On the other hand, our second result states that defining C with local observables leads to an at worst polynomially vanishing gradient, so long as the depth of V(θ) is O(log n). Taken together, our results establish a precise connection between locality and trainability. Finally, we illustrate these ideas with large-scale simulations, up to 100 qubits, of a particular VQA known as quantum autoencoders.
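As a rough numerical illustration of the global-versus-local distinction described above, the sketch below (assuming PennyLane and its default.qubit simulator are available) compares the variance of one partial derivative for a global cost C_G = 1 - p(|0...0>) and a local cost C_L = 1 - (1/n) Σ_j <|0><0|_j> on the same shallow, layered ansatz. The circuit layout, layer count, and sample sizes are illustrative assumptions, not the paper's exact construction.

    import numpy as onp  # plain NumPy for random sampling
    import pennylane as qml
    from pennylane import numpy as np  # differentiable NumPy interface

    def gradient_variances(n_qubits, n_layers=2, n_samples=100, seed=0):
        """Estimate Var[dC/dtheta] of one parameter over random initializations."""
        dev = qml.device("default.qubit", wires=n_qubits)
        rng = onp.random.default_rng(seed)

        def ansatz(theta):
            # Shallow layered ansatz: RY rotations plus nearest-neighbour CZs.
            for layer in range(n_layers):
                for q in range(n_qubits):
                    qml.RY(theta[layer, q], wires=q)
                for q in range(0, n_qubits - 1, 2):
                    qml.CZ(wires=[q, q + 1])

        @qml.qnode(dev)
        def global_overlap(theta):
            ansatz(theta)
            # Global observable: one projector acting on all n qubits at once.
            return qml.expval(qml.Projector([0] * n_qubits, wires=range(n_qubits)))

        # Local cost uses single-qubit terms: |0><0|_j = (I + Z_j)/2, so
        # C_L = 1 - (1/n) sum_j <|0><0|_j> = 1/2 - (1/(2n)) sum_j <Z_j>.
        h_local = qml.Hamiltonian(
            [-1.0 / (2 * n_qubits)] * n_qubits,
            [qml.PauliZ(q) for q in range(n_qubits)],
        )

        @qml.qnode(dev)
        def local_term(theta):
            ansatz(theta)
            return qml.expval(h_local)

        def c_global(theta):
            return 1.0 - global_overlap(theta)

        def c_local(theta):
            return 0.5 + local_term(theta)

        grads_g, grads_l = [], []
        for _ in range(n_samples):
            theta = np.array(
                rng.uniform(0, 2 * onp.pi, (n_layers, n_qubits)), requires_grad=True
            )
            # Track a single fixed partial derivative, dC/dtheta[0, 0].
            grads_g.append(qml.grad(c_global)(theta)[0, 0])
            grads_l.append(qml.grad(c_local)(theta)[0, 0])
        return onp.var(grads_g), onp.var(grads_l)

    if __name__ == "__main__":
        for n in (2, 4, 6, 8):
            var_g, var_l = gradient_variances(n)
            print(f"n={n:2d}  Var[global] = {var_g:.2e}  Var[local] = {var_l:.2e}")

Under the paper's two results, the global-cost variance should decay exponentially in n while the local-cost variance decays at worst polynomially at these shallow depths; the small qubit counts here are chosen only so the script runs quickly on a statevector simulator.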