Noise is present in modern-day quantum computers. Qubits are susceptible to interference from the surrounding environment, imperfect fabrication, TLS (two-level system) defects, and sometimes even gamma rays. Until large-scale error correction is available, the algorithms of today must remain functional in the presence of noise. This makes testing under noise an important step for validating that quantum algorithms / models will function on the quantum computers of today.
In this tutorial you will explore the basics of noisy circuit simulation in TFQ via the high-level tfq.layers API.
Setup
pip install tensorflow==2.15.0 tensorflow-quantum==0.7.3
pip install -q git+https://github.com/tensorflow/docs
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
import random
import cirq
import sympy
import tensorflow_quantum as tfq
import tensorflow as tf
import numpy as np
# Plotting
import matplotlib.pyplot as plt
import tensorflow_docs as tfdocs
import tensorflow_docs.plots
1. Understanding quantum noise
1.1 Basic circuit noise
Noise on a quantum computer impacts the bitstring samples you are able to measure from it. One intuitive way you can start to think about this is that a noisy quantum computer will "insert", "delete" or "replace" gates in random places like the diagram below:
Building off of this intuition, when dealing with noise you are no longer using a single pure state \(|\psi \rangle\) but instead dealing with an ensemble of all possible noisy realizations of your desired circuit: \(\rho = \sum_j p_j |\psi_j \rangle \langle \psi_j |\), where \(p_j\) gives the probability that the system is in \(|\psi_j \rangle\).
Revisiting the above picture, if we knew beforehand that 90% of the time our system executed perfectly, and 10% of the time it errored with just this one mode of failure, then our ensemble would be:
\(\rho = 0.9 |\psi_\text{desired} \rangle \langle \psi_\text{desired}| + 0.1 |\psi_\text{noisy} \rangle \langle \psi_\text{noisy}| \)
If there was more than just one way that our circuit could error, then the ensemble \(\rho\) would contain more than just two terms (one for each new noisy realization that could happen). \(\rho\) is referred to as the density matrix describing your noisy system.
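As a concrete check, the two-term ensemble above can be written out in plain NumPy. Here the basis ordering (00, 01, 10, 11) and the choice of \(|00\rangle\) as the single failure state are illustrative assumptions:

```python
import numpy as np

# Hypothetical two-term ensemble: the desired |11> state 90% of the time,
# a noisy |00> state 10% of the time (basis order: 00, 01, 10, 11).
psi_desired = np.array([0, 0, 0, 1], dtype=complex)
psi_noisy = np.array([1, 0, 0, 0], dtype=complex)

rho = 0.9 * np.outer(psi_desired, psi_desired.conj()) \
    + 0.1 * np.outer(psi_noisy, psi_noisy.conj())

print(np.trace(rho).real)    # a valid density matrix has trace 1
print(np.diag(rho).real)     # diagonal entries = bitstring probabilities
```

The diagonal of \(\rho\) directly gives the probability of sampling each bitstring, which is why density matrices are convenient for reasoning about noisy measurement statistics.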
1.2 Using channels to model circuit noise
Unfortunately in practice it's nearly impossible to know all the ways your circuit might error and their exact probabilities. A simplifying assumption you can make is that after each operation in your circuit there is some kind of channel that roughly captures how that operation might error. You can quickly create a circuit with some noise:
def x_circuit(qubits):
    """Produces an X wall circuit on `qubits`."""
    return cirq.Circuit(cirq.X.on_each(*qubits))

def make_noisy(circuit, p):
    """Add a depolarization channel to all qubits in `circuit` before measurement."""
    return circuit + cirq.Circuit(cirq.depolarize(p).on_each(*circuit.all_qubits()))
my_qubits = cirq.GridQubit.rect(1, 2)
my_circuit = x_circuit(my_qubits)
my_noisy_circuit = make_noisy(my_circuit, 0.5)
my_circuit
my_noisy_circuit
You can examine the noiseless density matrix \(\rho\) with:
rho = cirq.final_density_matrix(my_circuit)
np.round(rho, 3)
array([[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j]], dtype=complex64)
And the noisy density matrix \(\rho\) with:
rho = cirq.final_density_matrix(my_noisy_circuit)
np.round(rho, 3)
array([[0.111+0.j, 0.   +0.j, 0.   +0.j, 0.   +0.j],
       [0.   +0.j, 0.222+0.j, 0.   +0.j, 0.   +0.j],
       [0.   +0.j, 0.   +0.j, 0.222+0.j, 0.   +0.j],
       [0.   +0.j, 0.   +0.j, 0.   +0.j, 0.444+0.j]], dtype=complex64)
Comparing the two different \( \rho \) 's you can see that the noise has impacted the amplitudes of the state (and consequently sampling probabilities). In the noiseless case you would always expect to sample the \( |11\rangle \) state. But in the noisy state there is now a nonzero probability of sampling \( |00\rangle \) or \( |01\rangle \) or \( |10\rangle \) as well:
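You can reproduce those diagonal entries by hand: each qubit independently passes through a depolarizing channel after its X gate, and the two-qubit density matrix is the tensor product of the two single-qubit results. A NumPy sketch applying the standard channel definition \(\mathcal{E}(\rho) = (1-p)\rho + \frac{p}{3}(X\rho X + Y\rho Y + Z\rho Z)\) directly (pure-math illustration, not Cirq/TFQ code):

```python
import numpy as np

# Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

p = 0.5
rho1 = np.diag([0.0, 1.0]).astype(complex)  # single qubit |1><1| after the X gate

# Depolarizing channel: identity with prob 1-p, a uniform Pauli with prob p.
rho1_noisy = (1 - p) * rho1 + (p / 3) * (X @ rho1 @ X + Y @ rho1 @ Y + Z @ rho1 @ Z)

# Two independent qubits -> tensor product of the single-qubit results.
rho_noisy = np.kron(rho1_noisy, rho1_noisy)
print(np.round(np.diag(rho_noisy).real, 3))  # [0.111 0.222 0.222 0.444]
```

The single-qubit diagonal works out to \((1/3, 2/3)\), and \((1/3, 2/3) \otimes (1/3, 2/3) = (1/9, 2/9, 2/9, 4/9)\), matching the printed density matrix above.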
"""Sample from my_noisy_circuit."""
def plot_samples(circuit):
    samples = cirq.sample(circuit + cirq.measure(*circuit.all_qubits(), key='bits'), repetitions=1000)
    freqs, _ = np.histogram(samples.data['bits'], bins=[i + 0.01 for i in range(-1, 2 ** len(my_qubits))])
    plt.figure(figsize=(10, 5))
    plt.title('Noisy Circuit Sampling')
    plt.xlabel('Bitstring')
    plt.ylabel('Frequency')
    plt.bar([i for i in range(2 ** len(my_qubits))], freqs, tick_label=['00', '01', '10', '11'])
plot_samples(my_noisy_circuit)
Without any noise you will always get \(|11\rangle\):
"""Sample from my_circuit."""
plot_samples(my_circuit)
If you increase the noise a little further it will become harder and harder to distinguish the desired behavior (sampling \(|11\rangle\) ) from the noise:
my_really_noisy_circuit = make_noisy(my_circuit, 0.75)
plot_samples(my_really_noisy_circuit)
2. Basic noise in TFQ
With this understanding of how noise can impact circuit execution, you can explore how noise works in TFQ. TensorFlow Quantum uses Monte Carlo / trajectory-based simulation as an alternative to density matrix simulation, because the memory complexity of full density matrix simulation limits large simulations to roughly 20 qubits or fewer. Monte Carlo / trajectory simulation trades this cost in memory for additional cost in time. The backend='noisy' option is available on tfq.layers.Sample, tfq.layers.SampledExpectation and tfq.layers.Expectation (in the case of Expectation this adds a required repetitions parameter).
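The trajectory idea itself can be sketched without TFQ: run the circuit many times, independently sample in each run whether a Pauli error occurred, and average the results. A minimal NumPy sketch for the two-qubit X-wall with depolarize(0.5) (the `sample_bit` helper is a hypothetical illustration, not a TFQ API):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5
n_traj = 20000

def sample_bit():
    """One trajectory for one qubit: start in |1> (after the X gate),
    then apply no error (prob 1-p) or one uniform Pauli error (prob p)."""
    bit = 1
    if rng.random() < p:
        err = rng.integers(3)  # 0: X, 1: Y, 2: Z
        if err < 2:            # X or Y flip the measured bit; Z does not
            bit ^= 1
    return bit

bits = np.array([[sample_bit(), sample_bit()] for _ in range(n_traj)])
# Fraction of trajectories yielding |11>; expect about (2/3)^2 = 4/9 = 0.444,
# matching the 0.444 diagonal entry of the noisy density matrix.
print((bits.sum(axis=1) == 2).mean())
```

Each trajectory only tracks a pure state (here just a bit), so memory stays small; accuracy comes from averaging many repetitions, which is why the noisy backend asks for a repetitions count.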
2.1 Noisy sampling in TFQ
To recreate the above plots using TFQ and trajectory simulation you can use tfq.layers.Sample:
"""Draw bitstring samples from `my_noisy_circuit`"""
bitstrings = tfq.layers.Sample(backend='noisy')(my_noisy_circuit, repetitions=1000)
numeric_values = np.einsum('ijk,k->ij', bitstrings.to_tensor().numpy(), [1, 2])[0]
freqs, _ = np.histogram(numeric_values, bins=[i + 0.01 for i in range(-1, 2 ** len(my_qubits))])
plt.figure(figsize=(10, 5))
plt.title('Noisy Circuit Sampling')
plt.xlabel('Bitstring')
plt.ylabel('Frequency')
plt.bar([i for i in range(2 ** len(my_qubits))], freqs, tick_label=['00', '01', '10', '11'])
2.2 Noisy sample based expectation
To do noisy sample-based expectation calculation you can use tfq.layers.SampledExpectation:
some_observables = [cirq.X(my_qubits[0]), cirq.Z(my_qubits[0]), 3.0 * cirq.Y(my_qubits[1]) + 1]
some_observables
[cirq.X(cirq.GridQubit(0, 0)), cirq.Z(cirq.GridQubit(0, 0)), cirq.PauliSum(cirq.LinearDict({frozenset({(cirq.GridQubit(0, 1), cirq.Y)}): (3+0j), frozenset(): (1+0j)}))]
Compute the noiseless expectation estimates via sampling from the circuit:
noiseless_sampled_expectation = tfq.layers.SampledExpectation(backend='noiseless')(
my_circuit, operators=some_observables, repetitions=10000
)
noiseless_sampled_expectation.numpy()
array([[-0.0042, -1. , 0.9712]], dtype=float32)
Compare those with the noisy versions:
noisy_sampled_expectation = tfq.layers.SampledExpectation(backend='noisy')(
[my_noisy_circuit, my_really_noisy_circuit], operators=some_observables, repetitions=10000
)
noisy_sampled_expectation.numpy()
array([[-0.0022   , -0.3368   ,  0.979    ],
       [ 0.014    ,  0.011    ,  1.0732001]], dtype=float32)
You can see that the noise has particularly impacted the \(\langle \psi | Z | \psi \rangle\) accuracy, with my_really_noisy_circuit concentrating very quickly towards 0.
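This shrinkage is predictable: applying the single-qubit depolarizing channel \(\mathcal{E}(\rho) = (1-p)\rho + \frac{p}{3}(X\rho X + Y\rho Y + Z\rho Z)\) scales every Pauli expectation value by \(1 - 4p/3\). A quick sanity check against the values measured above:

```python
def depolarized_pauli_expectation(noiseless_value, p):
    """Pauli expectation after one single-qubit depolarizing channel.

    Tr[Z E(rho)] = (1-p)<Z> + p/3 (-<Z> - <Z> + <Z>) = (1 - 4p/3) <Z>.
    """
    return (1 - 4 * p / 3) * noiseless_value

# Noiseless <Z> on qubit 0 is -1 for this circuit.
print(depolarized_pauli_expectation(-1.0, 0.5))   # about -0.33, vs. measured -0.3368
print(depolarized_pauli_expectation(-1.0, 0.75))  # exactly 0, vs. measured 0.011
```

For \(p = 0.75\) the scaling factor vanishes entirely, which is why my_really_noisy_circuit gives \(\langle Z \rangle \approx 0\) up to sampling error.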
2.3 Noisy analytic expectation calculation
Doing noisy analytic expectation calculations is nearly identical to above:
noiseless_analytic_expectation = tfq.layers.Expectation(backend='noiseless')(
my_circuit, operators=some_observables
)
noiseless_analytic_expectation.numpy()
array([[ 1.9106853e-15, -1.0000000e+00, 1.0000002e+00]], dtype=float32)
noisy_analytic_expectation = tfq.layers.Expectation(backend='noisy')(
[my_noisy_circuit, my_really_noisy_circuit], operators=some_observables, repetitions=10000
)
noisy_analytic_expectation.numpy()
array([[ 1.9106853e-15, -3.3339995e-01, 1.0000000e+00], [ 1.9106855e-15, -5.5999989e-03, 1.0000000e+00]], dtype=float32)
3. Hybrid models and quantum data noise
Now that you have implemented some noisy circuit simulations in TFQ, you can experiment with how noise impacts quantum and hybrid quantum-classical models by comparing their noisy and noiseless performance. A good first check of whether a model or algorithm is robust to noise is to test it under a circuit-wide depolarizing model, which looks something like this:
Each time slice of the circuit (sometimes referred to as a moment) has a depolarizing channel appended after each gate operation in that time slice. The depolarizing channel will apply one of \(\{X, Y, Z \}\) with probability \(p\) or apply nothing (keep the original operation) with probability \(1-p\).
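The channel described above can be written down directly. A small NumPy sketch (pure-math illustration, not TFQ code) verifying that repeated per-moment depolarization preserves the trace of \(\rho\) while steadily degrading its purity \(\mathrm{Tr}[\rho^2]\):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    """One depolarizing channel: identity with prob 1-p, uniform Pauli with prob p."""
    return (1 - p) * rho + (p / 3) * sum(P @ rho @ P for P in (X, Y, Z))

# Start from the pure |+> state; apply the channel once per "moment".
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
for moment in range(3):
    rho = depolarize(rho, 0.01)
    # Trace stays 1 (probabilities conserved); purity Tr[rho^2] decays.
    print(np.trace(rho).real, np.trace(rho @ rho).real)
```

This purity decay per moment is why even a small \(p\) compounds over deep circuits, and why robustness checks like the one below matter.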
3.1 Data
For this example you can use some prepared circuits in the tfq.datasets module as training data:
qubits = cirq.GridQubit.rect(1, 8)
circuits, labels, pauli_sums, _ = tfq.datasets.xxz_chain(qubits, 'closed')
circuits[0]
A small helper function makes it easy to generate the data for both the noisy and noiseless cases:
def get_data(qubits, depolarize_p=0.):
    """Return quantum data circuits and labels in `tf.Tensor` form."""
    circuits, labels, pauli_sums, _ = tfq.datasets.xxz_chain(qubits, 'closed')
    if depolarize_p >= 1e-5:
        circuits = [circuit.with_noise(cirq.depolarize(depolarize_p)) for circuit in circuits]
    tmp = list(zip(circuits, labels))
    random.shuffle(tmp)
    circuits_tensor = tfq.convert_to_tensor([x[0] for x in tmp])
    labels_tensor = tf.convert_to_tensor([x[1] for x in tmp])

    return circuits_tensor, labels_tensor
3.2 Define a model circuit
Now that you have quantum data in the form of circuits, you need a circuit to model this data. As with the data, you can write a helper function that generates this circuit, optionally containing noise:
def modelling_circuit(qubits, depth, depolarize_p=0.):
    """A simple classifier circuit."""
    dim = len(qubits)
    ret = cirq.Circuit(cirq.H.on_each(*qubits))

    for i in range(depth):
        # Entangle layer.
        ret += cirq.Circuit(cirq.CX(q1, q2) for (q1, q2) in zip(qubits[::2], qubits[1::2]))
        ret += cirq.Circuit(cirq.CX(q1, q2) for (q1, q2) in zip(qubits[1::2], qubits[2::2]))
        # Learnable rotation layer.
        # i_params = sympy.symbols(f'layer-{i}-0:{dim}')
        param = sympy.Symbol(f'layer-{i}')
        single_qb = cirq.X
        if i % 2 == 1:
            single_qb = cirq.Y
        ret += cirq.Circuit(single_qb(q) ** param for q in qubits)

    if depolarize_p >= 1e-5:
        ret = ret.with_noise(cirq.depolarize(depolarize_p))

    return ret, [op(q) for q in qubits for op in [cirq.X, cirq.Y, cirq.Z]]
modelling_circuit(qubits, 3)[0]
3.3 Model building and training
With your data and model circuit built, the final helper function you need is one that can assemble either a noisy or a noiseless hybrid quantum tf.keras.Model:
def build_keras_model(qubits, depolarize_p=0.):
    """Prepare a noisy hybrid quantum classical Keras model."""
    spin_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)

    circuit_and_readout = modelling_circuit(qubits, 4, depolarize_p)
    if depolarize_p >= 1e-5:
        quantum_model = tfq.layers.NoisyPQC(*circuit_and_readout, sample_based=False, repetitions=10)(spin_input)
    else:
        quantum_model = tfq.layers.PQC(*circuit_and_readout)(spin_input)

    intermediate = tf.keras.layers.Dense(4, activation='sigmoid')(quantum_model)
    post_process = tf.keras.layers.Dense(1)(intermediate)

    return tf.keras.Model(inputs=[spin_input], outputs=[post_process])
4. Compare performance
4.1 Noiseless baseline
With your data generation and model building code in place, you can now compare model performance in the noiseless and noisy settings. First, run a reference noiseless training:
training_histories = dict()
depolarize_p = 0.
n_epochs = 50
phase_classifier = build_keras_model(qubits, depolarize_p)
phase_classifier.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
# Show the keras plot of the model
tf.keras.utils.plot_model(phase_classifier, show_shapes=True, dpi=70)
noiseless_data, noiseless_labels = get_data(qubits, depolarize_p)
training_histories['noiseless'] = phase_classifier.fit(x=noiseless_data,
y=noiseless_labels,
batch_size=16,
epochs=n_epochs,
validation_split=0.15,
verbose=1)
Epoch 1/50 4/4 [==============================] - 1s 128ms/step - loss: 0.7064 - accuracy: 0.5156 - val_loss: 0.7623 - val_accuracy: 0.2500 Epoch 2/50 4/4 [==============================] - 0s 68ms/step - loss: 0.6883 - accuracy: 0.5156 - val_loss: 0.7200 - val_accuracy: 0.2500 Epoch 3/50 4/4 [==============================] - 0s 66ms/step - loss: 0.6821 - accuracy: 0.5156 - val_loss: 0.6877 - val_accuracy: 0.2500 Epoch 4/50 4/4 [==============================] - 0s 62ms/step - loss: 0.6800 - accuracy: 0.5156 - val_loss: 0.6646 - val_accuracy: 0.2500 Epoch 5/50 4/4 [==============================] - 0s 62ms/step - loss: 0.6738 - accuracy: 0.5156 - val_loss: 0.6556 - val_accuracy: 0.2500 Epoch 6/50 4/4 [==============================] - 0s 62ms/step - loss: 0.6706 - accuracy: 0.5156 - val_loss: 0.6464 - val_accuracy: 0.2500 Epoch 7/50 4/4 [==============================] - 0s 62ms/step - loss: 0.6629 - accuracy: 0.5156 - val_loss: 0.6530 - val_accuracy: 0.2500 Epoch 8/50 4/4 [==============================] - 0s 61ms/step - loss: 0.6547 - accuracy: 0.5156 - val_loss: 0.6561 - val_accuracy: 0.2500 Epoch 9/50 4/4 [==============================] - 0s 63ms/step - loss: 0.6460 - accuracy: 0.5156 - val_loss: 0.6536 - val_accuracy: 0.2500 Epoch 10/50 4/4 [==============================] - 0s 63ms/step - loss: 0.6376 - accuracy: 0.5156 - val_loss: 0.6552 - val_accuracy: 0.2500 Epoch 11/50 4/4 [==============================] - 0s 61ms/step - loss: 0.6258 - accuracy: 0.5156 - val_loss: 0.6471 - val_accuracy: 0.2500 Epoch 12/50 4/4 [==============================] - 0s 61ms/step - loss: 0.6124 - accuracy: 0.5156 - val_loss: 0.6324 - val_accuracy: 0.2500 Epoch 13/50 4/4 [==============================] - 0s 60ms/step - loss: 0.5989 - accuracy: 0.5156 - val_loss: 0.6230 - val_accuracy: 0.2500 Epoch 14/50 4/4 [==============================] - 0s 62ms/step - loss: 0.5806 - accuracy: 0.5156 - val_loss: 0.6053 - val_accuracy: 0.2500 Epoch 15/50 4/4 
[==============================] - 0s 62ms/step - loss: 0.5617 - accuracy: 0.5156 - val_loss: 0.5858 - val_accuracy: 0.2500 Epoch 16/50 4/4 [==============================] - 0s 60ms/step - loss: 0.5401 - accuracy: 0.6250 - val_loss: 0.5667 - val_accuracy: 0.4167 Epoch 17/50 4/4 [==============================] - 0s 59ms/step - loss: 0.5195 - accuracy: 0.6875 - val_loss: 0.5379 - val_accuracy: 0.5833 Epoch 18/50 4/4 [==============================] - 0s 59ms/step - loss: 0.4961 - accuracy: 0.7344 - val_loss: 0.5183 - val_accuracy: 0.5833 Epoch 19/50 4/4 [==============================] - 0s 59ms/step - loss: 0.4698 - accuracy: 0.7969 - val_loss: 0.5065 - val_accuracy: 0.5833 Epoch 20/50 4/4 [==============================] - 0s 60ms/step - loss: 0.4467 - accuracy: 0.7969 - val_loss: 0.5014 - val_accuracy: 0.5833 Epoch 21/50 4/4 [==============================] - 0s 60ms/step - loss: 0.4231 - accuracy: 0.7969 - val_loss: 0.4707 - val_accuracy: 0.5833 Epoch 22/50 4/4 [==============================] - 0s 59ms/step - loss: 0.3996 - accuracy: 0.8594 - val_loss: 0.4418 - val_accuracy: 0.6667 Epoch 23/50 4/4 [==============================] - 0s 62ms/step - loss: 0.3776 - accuracy: 0.8906 - val_loss: 0.4209 - val_accuracy: 0.7500 Epoch 24/50 4/4 [==============================] - 0s 59ms/step - loss: 0.3552 - accuracy: 0.8750 - val_loss: 0.4235 - val_accuracy: 0.6667 Epoch 25/50 4/4 [==============================] - 0s 59ms/step - loss: 0.3342 - accuracy: 0.8750 - val_loss: 0.3945 - val_accuracy: 0.7500 Epoch 26/50 4/4 [==============================] - 0s 59ms/step - loss: 0.3136 - accuracy: 0.8906 - val_loss: 0.3658 - val_accuracy: 0.8333 Epoch 27/50 4/4 [==============================] - 0s 60ms/step - loss: 0.2996 - accuracy: 0.9062 - val_loss: 0.3533 - val_accuracy: 0.8333 Epoch 28/50 4/4 [==============================] - 0s 60ms/step - loss: 0.2802 - accuracy: 0.9062 - val_loss: 0.3455 - val_accuracy: 0.8333 Epoch 29/50 4/4 [==============================] - 0s 
59ms/step - loss: 0.2652 - accuracy: 0.9062 - val_loss: 0.3150 - val_accuracy: 0.8333 Epoch 30/50 4/4 [==============================] - 0s 59ms/step - loss: 0.2509 - accuracy: 0.9375 - val_loss: 0.3065 - val_accuracy: 0.8333 Epoch 31/50 4/4 [==============================] - 0s 59ms/step - loss: 0.2389 - accuracy: 0.9219 - val_loss: 0.2997 - val_accuracy: 0.8333 Epoch 32/50 4/4 [==============================] - 0s 59ms/step - loss: 0.2260 - accuracy: 0.9375 - val_loss: 0.2870 - val_accuracy: 0.8333 Epoch 33/50 4/4 [==============================] - 0s 58ms/step - loss: 0.2199 - accuracy: 0.9375 - val_loss: 0.2866 - val_accuracy: 0.8333 Epoch 34/50 4/4 [==============================] - 0s 59ms/step - loss: 0.2156 - accuracy: 0.9375 - val_loss: 0.2437 - val_accuracy: 0.9167 Epoch 35/50 4/4 [==============================] - 0s 59ms/step - loss: 0.2013 - accuracy: 0.9688 - val_loss: 0.2427 - val_accuracy: 0.8333 Epoch 36/50 4/4 [==============================] - 0s 59ms/step - loss: 0.1951 - accuracy: 0.9375 - val_loss: 0.2822 - val_accuracy: 0.8333 Epoch 37/50 4/4 [==============================] - 0s 60ms/step - loss: 0.1881 - accuracy: 0.9219 - val_loss: 0.2562 - val_accuracy: 0.8333 Epoch 38/50 4/4 [==============================] - 0s 59ms/step - loss: 0.1830 - accuracy: 0.9531 - val_loss: 0.2107 - val_accuracy: 0.9167 Epoch 39/50 4/4 [==============================] - 0s 60ms/step - loss: 0.1757 - accuracy: 0.9688 - val_loss: 0.2147 - val_accuracy: 0.9167 Epoch 40/50 4/4 [==============================] - 0s 59ms/step - loss: 0.1667 - accuracy: 0.9688 - val_loss: 0.2235 - val_accuracy: 0.8333 Epoch 41/50 4/4 [==============================] - 0s 60ms/step - loss: 0.1677 - accuracy: 0.9219 - val_loss: 0.2365 - val_accuracy: 0.8333 Epoch 42/50 4/4 [==============================] - 0s 59ms/step - loss: 0.1550 - accuracy: 0.9531 - val_loss: 0.2064 - val_accuracy: 0.8333 Epoch 43/50 4/4 [==============================] - 0s 59ms/step - loss: 0.1587 - accuracy: 
0.9688 - val_loss: 0.1898 - val_accuracy: 0.9167 Epoch 44/50 4/4 [==============================] - 0s 59ms/step - loss: 0.1497 - accuracy: 0.9688 - val_loss: 0.2048 - val_accuracy: 0.8333 Epoch 45/50 4/4 [==============================] - 0s 58ms/step - loss: 0.1448 - accuracy: 0.9688 - val_loss: 0.1962 - val_accuracy: 0.8333 Epoch 46/50 4/4 [==============================] - 0s 58ms/step - loss: 0.1425 - accuracy: 0.9688 - val_loss: 0.1958 - val_accuracy: 0.8333 Epoch 47/50 4/4 [==============================] - 0s 59ms/step - loss: 0.1401 - accuracy: 0.9688 - val_loss: 0.1796 - val_accuracy: 0.9167 Epoch 48/50 4/4 [==============================] - 0s 60ms/step - loss: 0.1333 - accuracy: 0.9688 - val_loss: 0.1877 - val_accuracy: 0.8333 Epoch 49/50 4/4 [==============================] - 0s 59ms/step - loss: 0.1297 - accuracy: 0.9688 - val_loss: 0.1774 - val_accuracy: 0.9167 Epoch 50/50 4/4 [==============================] - 0s 59ms/step - loss: 0.1267 - accuracy: 0.9688 - val_loss: 0.1765 - val_accuracy: 0.8333
And explore the results and accuracy:
loss_plotter = tfdocs.plots.HistoryPlotter(metric='loss', smoothing_std=10)
loss_plotter.plot(training_histories)
acc_plotter = tfdocs.plots.HistoryPlotter(metric='accuracy', smoothing_std=10)
acc_plotter.plot(training_histories)
4.2 Noisy comparison
Now you can build a new model with a noisy structure and compare it to the model above; the code is nearly identical:
depolarize_p = 0.001
n_epochs = 50
noisy_phase_classifier = build_keras_model(qubits, depolarize_p)
noisy_phase_classifier.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
# Show the keras plot of the model
tf.keras.utils.plot_model(noisy_phase_classifier, show_shapes=True, dpi=70)
noisy_data, noisy_labels = get_data(qubits, depolarize_p)
training_histories['noisy'] = noisy_phase_classifier.fit(x=noisy_data,
y=noisy_labels,
batch_size=16,
epochs=n_epochs,
validation_split=0.15,
verbose=1)
Epoch 1/50 4/4 [==============================] - 9s 1s/step - loss: 0.8587 - accuracy: 0.5156 - val_loss: 0.7489 - val_accuracy: 0.5833 Epoch 2/50 4/4 [==============================] - 5s 1s/step - loss: 0.7983 - accuracy: 0.5156 - val_loss: 0.7018 - val_accuracy: 0.5833 Epoch 3/50 4/4 [==============================] - 5s 1s/step - loss: 0.7408 - accuracy: 0.5156 - val_loss: 0.6818 - val_accuracy: 0.5833 Epoch 4/50 4/4 [==============================] - 5s 1s/step - loss: 0.7100 - accuracy: 0.5625 - val_loss: 0.6729 - val_accuracy: 0.4167 Epoch 5/50 4/4 [==============================] - 5s 1s/step - loss: 0.6942 - accuracy: 0.4844 - val_loss: 0.6789 - val_accuracy: 0.4167 Epoch 6/50 4/4 [==============================] - 5s 1s/step - loss: 0.6881 - accuracy: 0.4844 - val_loss: 0.6802 - val_accuracy: 0.4167 Epoch 7/50 4/4 [==============================] - 5s 1s/step - loss: 0.6885 - accuracy: 0.4844 - val_loss: 0.6798 - val_accuracy: 0.4167 Epoch 8/50 4/4 [==============================] - 5s 1s/step - loss: 0.6861 - accuracy: 0.4844 - val_loss: 0.6893 - val_accuracy: 0.4167 Epoch 9/50 4/4 [==============================] - 5s 1s/step - loss: 0.6866 - accuracy: 0.4844 - val_loss: 0.6849 - val_accuracy: 0.4167 Epoch 10/50 4/4 [==============================] - 5s 1s/step - loss: 0.6851 - accuracy: 0.4844 - val_loss: 0.6805 - val_accuracy: 0.4167 Epoch 11/50 4/4 [==============================] - 5s 1s/step - loss: 0.6825 - accuracy: 0.4844 - val_loss: 0.6790 - val_accuracy: 0.4167 Epoch 12/50 4/4 [==============================] - 5s 1s/step - loss: 0.6808 - accuracy: 0.4844 - val_loss: 0.6746 - val_accuracy: 0.4167 Epoch 13/50 4/4 [==============================] - 5s 1s/step - loss: 0.6755 - accuracy: 0.4844 - val_loss: 0.6609 - val_accuracy: 0.4167 Epoch 14/50 4/4 [==============================] - 5s 1s/step - loss: 0.6692 - accuracy: 0.4844 - val_loss: 0.6576 - val_accuracy: 0.4167 Epoch 15/50 4/4 [==============================] - 5s 1s/step - loss: 0.6629 
- accuracy: 0.4844 - val_loss: 0.6432 - val_accuracy: 0.4167 Epoch 16/50 4/4 [==============================] - 5s 1s/step - loss: 0.6631 - accuracy: 0.4844 - val_loss: 0.6411 - val_accuracy: 0.4167 Epoch 17/50 4/4 [==============================] - 5s 1s/step - loss: 0.6536 - accuracy: 0.4844 - val_loss: 0.6328 - val_accuracy: 0.4167 Epoch 18/50 4/4 [==============================] - 5s 1s/step - loss: 0.6487 - accuracy: 0.5000 - val_loss: 0.6253 - val_accuracy: 0.4167 Epoch 19/50 4/4 [==============================] - 5s 1s/step - loss: 0.6398 - accuracy: 0.5312 - val_loss: 0.6310 - val_accuracy: 0.4167 Epoch 20/50 4/4 [==============================] - 5s 1s/step - loss: 0.6364 - accuracy: 0.5312 - val_loss: 0.6220 - val_accuracy: 0.5000 Epoch 21/50 4/4 [==============================] - 5s 1s/step - loss: 0.6238 - accuracy: 0.6250 - val_loss: 0.5985 - val_accuracy: 0.6667 Epoch 22/50 4/4 [==============================] - 5s 1s/step - loss: 0.6080 - accuracy: 0.7188 - val_loss: 0.5843 - val_accuracy: 0.6667 Epoch 23/50 4/4 [==============================] - 5s 1s/step - loss: 0.6070 - accuracy: 0.6719 - val_loss: 0.6036 - val_accuracy: 0.6667 Epoch 24/50 4/4 [==============================] - 5s 1s/step - loss: 0.5817 - accuracy: 0.7969 - val_loss: 0.5762 - val_accuracy: 0.7500 Epoch 25/50 4/4 [==============================] - 5s 1s/step - loss: 0.5902 - accuracy: 0.7500 - val_loss: 0.5790 - val_accuracy: 0.7500 Epoch 26/50 4/4 [==============================] - 5s 1s/step - loss: 0.5689 - accuracy: 0.7656 - val_loss: 0.5613 - val_accuracy: 0.6667 Epoch 27/50 4/4 [==============================] - 5s 1s/step - loss: 0.5568 - accuracy: 0.8281 - val_loss: 0.5489 - val_accuracy: 0.8333 Epoch 28/50 4/4 [==============================] - 5s 1s/step - loss: 0.5370 - accuracy: 0.8906 - val_loss: 0.5487 - val_accuracy: 0.6667 Epoch 29/50 4/4 [==============================] - 5s 1s/step - loss: 0.5440 - accuracy: 0.8750 - val_loss: 0.5189 - val_accuracy: 0.9167 Epoch 
30/50 4/4 [==============================] - 5s 1s/step - loss: 0.5215 - accuracy: 0.8906 - val_loss: 0.5219 - val_accuracy: 0.9167 Epoch 31/50 4/4 [==============================] - 5s 1s/step - loss: 0.5002 - accuracy: 0.9062 - val_loss: 0.5261 - val_accuracy: 0.7500 Epoch 32/50 4/4 [==============================] - 5s 1s/step - loss: 0.4765 - accuracy: 0.8906 - val_loss: 0.4910 - val_accuracy: 0.7500 Epoch 33/50 4/4 [==============================] - 5s 1s/step - loss: 0.4976 - accuracy: 0.8906 - val_loss: 0.5319 - val_accuracy: 0.8333 Epoch 34/50 4/4 [==============================] - 5s 1s/step - loss: 0.4840 - accuracy: 0.8438 - val_loss: 0.4400 - val_accuracy: 0.8333 Epoch 35/50 4/4 [==============================] - 5s 1s/step - loss: 0.4463 - accuracy: 0.9219 - val_loss: 0.4438 - val_accuracy: 0.9167 Epoch 36/50 4/4 [==============================] - 5s 1s/step - loss: 0.4382 - accuracy: 0.8906 - val_loss: 0.4299 - val_accuracy: 1.0000 Epoch 37/50 4/4 [==============================] - 5s 1s/step - loss: 0.4484 - accuracy: 0.8438 - val_loss: 0.4481 - val_accuracy: 0.8333 Epoch 38/50 4/4 [==============================] - 5s 1s/step - loss: 0.4153 - accuracy: 0.9375 - val_loss: 0.3804 - val_accuracy: 1.0000 Epoch 39/50 4/4 [==============================] - 5s 1s/step - loss: 0.4134 - accuracy: 0.9219 - val_loss: 0.4019 - val_accuracy: 0.8333 Epoch 40/50 4/4 [==============================] - 5s 1s/step - loss: 0.4112 - accuracy: 0.8594 - val_loss: 0.3829 - val_accuracy: 0.9167 Epoch 41/50 4/4 [==============================] - 5s 1s/step - loss: 0.3679 - accuracy: 0.9219 - val_loss: 0.3583 - val_accuracy: 0.9167 Epoch 42/50 4/4 [==============================] - 5s 1s/step - loss: 0.4059 - accuracy: 0.9062 - val_loss: 0.3914 - val_accuracy: 0.9167 Epoch 43/50 4/4 [==============================] - 5s 1s/step - loss: 0.3800 - accuracy: 0.8906 - val_loss: 0.3482 - val_accuracy: 0.9167 Epoch 44/50 4/4 [==============================] - 5s 1s/step - loss: 
0.3772 - accuracy: 0.8750 - val_loss: 0.3994 - val_accuracy: 0.8333 Epoch 45/50 4/4 [==============================] - 5s 1s/step - loss: 0.3428 - accuracy: 0.9375 - val_loss: 0.3678 - val_accuracy: 0.9167 Epoch 46/50 4/4 [==============================] - 5s 1s/step - loss: 0.3529 - accuracy: 0.9219 - val_loss: 0.2957 - val_accuracy: 1.0000 Epoch 47/50 4/4 [==============================] - 5s 1s/step - loss: 0.3429 - accuracy: 0.9219 - val_loss: 0.3239 - val_accuracy: 0.9167 Epoch 48/50 4/4 [==============================] - 5s 1s/step - loss: 0.3162 - accuracy: 0.9375 - val_loss: 0.3493 - val_accuracy: 0.9167 Epoch 49/50 4/4 [==============================] - 5s 1s/step - loss: 0.3009 - accuracy: 0.9375 - val_loss: 0.4720 - val_accuracy: 0.6667 Epoch 50/50 4/4 [==============================] - 5s 1s/step - loss: 0.2985 - accuracy: 0.9375 - val_loss: 0.3921 - val_accuracy: 1.0000
loss_plotter.plot(training_histories)
acc_plotter.plot(training_histories)