Models¶
Quantum Boltzmann Machine¶
class zyglrox.models.qbm.QuantumBoltzmannMachine(nqubits: int, qc: zyglrox.core.circuit.QuantumCircuit, pdf: numpy.ndarray, hamiltonian, optimizer=None, **kwargs)¶

Bases: object

Quantum Boltzmann Machine according to Kappen (2019).

Initialize.
Args:

- nqubits (int): The number of qubits in the system.
- qc (QuantumCircuit): Parametrized quantum circuit for \(N\) spins.
- pdf (np.ndarray): Array containing the target distribution \(q(x)\).
- optimizer (tf.optimizer.Optimizer): TensorFlow optimizer.

Returns (inplace):
None
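For orientation, here is a minimal construction sketch based only on the signature above. The ansatz circuit and model Hamiltonian are left as placeholders, since their construction is not covered in this section, and the Adam optimizer is just an illustrative choice.

import numpy as np
import tensorflow as tf
from zyglrox.models.qbm import QuantumBoltzmannMachine

nqubits = 4

# Target distribution q(x) over the 2**nqubits computational basis states, normalized to sum to 1.
pdf = np.random.rand(2 ** nqubits)
pdf /= pdf.sum()

# Placeholders: build these with the zyglrox API (not documented in this section).
qc = ...           # zyglrox.core.circuit.QuantumCircuit, parametrized ansatz for N spins
hamiltonian = ...  # model Hamiltonian handed to the inner eigensolver

qbm = QuantumBoltzmannMachine(
    nqubits=nqubits,
    qc=qc,
    pdf=pdf,
    hamiltonian=hamiltonian,
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
)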
train(eta_qbm=0.001, tol_qbm=0.0001, tol_qve=1e-08, epochs_qbm=100, epochs_qve=1500)¶

Train the quantum Boltzmann machine. We minimize the value of the gradient \(\frac{1}{M}\sum_i^M |\nabla_{\theta_i} \mathcal{L}(\theta)|\) as defined above.
Args:

- eta_qbm (float): Learning rate for the QuantumBoltzmannMachine.
- tol_qbm (float): Tolerance \(\epsilon_{qbm}\) on the mean squared error of the statistics. If the absolute difference between iterations is smaller than this value, training stops.
- tol_qve (float): Tolerance \(\epsilon_{qve}\) on the energy. If the absolute difference between iterations is smaller than this value, training stops.
- epochs_qbm (int): Maximum number of iterations for the training algorithm of the QuantumBoltzmannMachine.
- epochs_qve (int): Maximum number of iterations for the training algorithm of the QuantumVariationalEigensolver.

Returns (inplace):
None
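As an example, training the instance constructed above with the documented defaults would look like the call below; the keyword values simply restate the defaults from the signature.

qbm.train(
    eta_qbm=0.001,   # learning rate for the QBM parameters
    tol_qbm=1e-04,   # stop when the statistics' MSE changes by less than this
    tol_qve=1e-08,   # stop the inner eigensolver when the energy change is below this
    epochs_qbm=100,  # maximum number of outer QBM iterations
    epochs_qve=1500, # maximum number of inner QVE iterations
)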
plot()¶

Plot the convergence of the QuantumBoltzmannMachine learning step. Convergence is obtained when \(\frac{1}{M}\sum_i^M |\nabla_{\theta_i} \mathcal{L}(\theta)|<\epsilon_{qbm}\), where \(\epsilon_{qbm}\) is the tolerance defined when calling the train method.

Returns (inplace):
None
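For instance, after the train call shown earlier:

qbm.plot()  # shows the mean absolute gradient per iteration; convergence means it dropped below tol_qbm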
get_statistics(plot=True)¶

Compare the statistics of the learned QuantumBoltzmannMachine with the target ground state statistics.

\[\begin{split}\text{MSE}_{h} = \frac{1}{M} \sum_i^M (\langle \sigma_i \rangle - \langle \hat{\sigma}_i \rangle)^2 \\ \text{MSE}_{w} = \frac{1}{M} \sum_{i,j}^M (\langle \sigma_i \sigma_j \rangle - \langle \hat{\sigma}_i \hat{\sigma}_j\rangle)^2\end{split}\]

Warning

We do not know for sure whether the QuantumVariationalEigensolver was successful in obtaining the ground state of the model QBM Hamiltonian unless we compare it with the exact ground state first.

Args:

- plot (bool): Whether to plot the training schedule.

Returns (dict):
Dict with entries 'field' and 'coupling' with the respective MSE between the circuit and true statistics.
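A small usage sketch, relying only on the return keys documented above:

stats = qbm.get_statistics(plot=False)      # set plot=True to also show the training schedule
print("field MSE:   ", stats["field"])      # MSE_h for the single-spin expectations
print("coupling MSE:", stats["coupling"])   # MSE_w for the two-spin correlations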
Quantum Variational Eigensolver¶
We have both a gradient-based and a derivative-free Quantum Variational Eigensolver:

- Quantum Variational Eigensolver according to Peruzzo et al. (2014) with gradient-based optimization.
- Quantum Variational Eigensolver according to Peruzzo et al. (2014) using only gradient-free optimizers.
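In both variants the objective is the standard variational energy (our notation here, not taken from the zyglrox source): the parametrized circuit state \(|\psi(\theta)\rangle\) is optimized so that

\[E(\theta) = \langle \psi(\theta) | H | \psi(\theta) \rangle \geq E_0,\]

where \(E_0\) is the ground state energy of \(H\). The gradient-based variant minimizes \(E(\theta)\) using gradients with respect to \(\theta\), while the derivative-free variant relies only on evaluations of \(E(\theta)\).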