Simultaneous perturbation stochastic approximation optimizer¶
Simultaneous Perturbation Stochastic Approximation (SPSA) is a gradient-free optimization method that uses stochastic approximations of the gradient. Unlike gradient-based methods such as gradient descent, SPSA does not require knowledge of the gradient of the function being optimized. Instead, SPSA estimates the gradient by perturbing all parameters of the function simultaneously in a random direction and observing the resulting change in the cost function. The update rule for SPSA is given by:
$$\theta_{k+1} = \theta_k - a_k \, \hat{g}_k(\theta_k),$$

where $\hat{g}_k(\theta_k)$ is the stochastic estimate of the gradient, obtained from just two evaluations of the cost function $f$:

$$\bigl(\hat{g}_k(\theta_k)\bigr)_i = \frac{f(\theta_k + c_k \Delta_k) - f(\theta_k - c_k \Delta_k)}{2 c_k (\Delta_k)_i},$$

with $\Delta_k$ a random perturbation vector whose components are drawn from $\{-1, +1\}$. The step size $a_k$ decreases with the iteration number $k$,

$$a_k = \frac{a}{(A + k + 1)^{\alpha}},$$

and can be controlled by the following hyperparameters:

- scaling parameter $a$
- stability constant $A$
- decay rate $\alpha$

On the other hand, the size of the perturbation $c_k$ also decreases with the iteration number,

$$c_k = \frac{c}{(k + 1)^{\gamma}},$$

which can be controlled by the hyperparameters:

- scaling parameter $c$
- decay rate $\gamma$
By iteratively applying the update rule with appropriately chosen hyperparameters, SPSA can converge to a local minimum of the function being optimized.
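The update rule above can be sketched in a few lines of plain Python. This is a minimal illustration, not OpenQAOA's implementation; the default values $a=0.2$, $c=0.1$, $A=20$, $\alpha=0.602$, $\gamma=0.101$ follow commonly recommended choices from Spall's work and are assumptions here, as is the toy quadratic cost function:

```python
import random

def spsa_minimize(f, theta, iterations=300,
                  a=0.2, c=0.1, A=20.0, alpha=0.602, gamma=0.101):
    """Minimize f with SPSA (illustrative sketch, standard gain sequences)."""
    theta = list(theta)
    for k in range(iterations):
        a_k = a / (A + k + 1) ** alpha   # decaying step size
        c_k = c / (k + 1) ** gamma       # decaying perturbation size
        # Perturb every parameter simultaneously by +/-1 (Rademacher draw)
        delta = [random.choice([-1.0, 1.0]) for _ in theta]
        plus = [t + c_k * d for t, d in zip(theta, delta)]
        minus = [t - c_k * d for t, d in zip(theta, delta)]
        # Two cost evaluations estimate all gradient components at once
        diff = (f(plus) - f(minus)) / (2.0 * c_k)
        theta = [t - a_k * diff / d for t, d in zip(theta, delta)]
    return theta

random.seed(42)
# Toy cost with minimum at (1, -2); stands in for a QAOA expectation value
cost = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
result = spsa_minimize(cost, [0.0, 0.0])
```

Note that each iteration needs only two cost evaluations regardless of the number of parameters, which is what makes SPSA attractive when every evaluation requires running a quantum circuit.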
OpenQAOA example¶
The code below shows how to run QAOA with the SPSA optimizer, using the following hyperparameters: