# tfq.differentiators.Adjoint

Differentiate a circuit with respect to its inputs by the adjoint method.

Inherits From: `Differentiator`

Caution: This differentiator is only compatible with analytic expectation
calculations and the native C++ ops (`backend = None`). The methods used by
this differentiation technique cannot be realized easily on a real device.

The Adjoint differentiator follows the methods described in
[arXiv:1912.10877](https://arxiv.org/abs/1912.10877) and
[doi: 10.1111/j.1365-246X.2006.02978.x](https://academic.oup.com/gji/article-pdf/167/2/495/1492368/167-2-495.pdf).
The adjoint method differentiates the input circuits in roughly one forward
and one backward pass over the circuits; to calculate the gradient of a
symbol, only a constant number of gate operations need to be applied to the
circuit's state. When the number of parameters in a circuit is very large,
this differentiator performs much better than all the others found in TFQ.
    my_op = tfq.get_expectation_op()
    adjoint_differentiator = tfq.differentiators.Adjoint()
    # Get an expectation op, with this differentiator attached.
    op = adjoint_differentiator.generate_differentiable_op(
        analytic_op=my_op
    )
    qubit = cirq.GridQubit(0, 0)
    circuit = tfq.convert_to_tensor([
        cirq.Circuit(cirq.X(qubit) ** sympy.Symbol('alpha'))
    ])
    psums = tfq.convert_to_tensor([[cirq.Z(qubit)]])
    symbol_values = np.array([[0.123]], dtype=np.float32)
    # Calculate tfq gradient.
    symbol_values_t = tf.convert_to_tensor(symbol_values)
    symbol_names = tf.convert_to_tensor(['alpha'])
    with tf.GradientTape() as g:
        g.watch(symbol_values_t)
        expectations = op(circuit, symbol_names, symbol_values_t, psums)
    grads = g.gradient(expectations, symbol_values_t)
    grads
    tf.Tensor([[-1.1839]], shape=(1, 1), dtype=float32)
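As a sanity check on the numbers above, the adjoint rule can be sketched in plain NumPy for this single-qubit circuit. This is an illustrative reconstruction, not TFQ's C++ implementation: the forward pass builds the final state, the backward pass applies the observable to it, and each parameter's gradient then needs only one extra application of the gate's generator.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def x_pow(alpha):
    # cirq.X ** alpha, up to a global phase that cancels in expectations.
    theta = np.pi * alpha / 2
    return np.cos(theta) * I2 - 1j * np.sin(theta) * X

def x_pow_dalpha(alpha):
    # d/d(alpha) of x_pow: one extra application of the generator -i*pi/2 * X.
    return (-1j * np.pi / 2) * X @ x_pow(alpha)

alpha = 0.123
ket0 = np.array([1, 0], dtype=complex)

psi = x_pow(alpha) @ ket0   # forward pass: final state
lam = Z @ psi               # backward pass: observable applied to the state
# Adjoint gradient: d<Z>/d(alpha) = 2 * Re( <lam| dU/d(alpha) |0> )
grad = 2 * np.real(np.conj(lam) @ (x_pow_dalpha(alpha) @ ket0))

expectation = np.real(np.conj(psi) @ (Z @ psi))  # cos(pi * alpha)
print(grad)  # -pi * sin(pi * alpha), matching the -1.1839 above up to rounding
```

For a deeper circuit, the backward pass sweeps `lam` back through the inverse gates, which is why the total cost stays roughly one forward plus one backward simulation regardless of the number of parameters.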
Methods
-------

### `differentiate_sampled`

    differentiate_sampled(
        programs,
        symbol_names,
        symbol_values,
        pauli_sums,
        num_samples,
        forward_pass_vals,
        grad
    )

Differentiate a circuit with sampled expectation.

This is called at graph runtime by TensorFlow. `differentiate_sampled`
calls the inheriting differentiator's `get_gradient_circuits` and uses
those components to construct the gradient.

Note: the default implementation does not use `forward_pass_vals`; the
inheriting differentiator is free to override the default implementation
and use this argument if desired.
Args:

*   `programs`: `tf.Tensor` of strings with shape `[batch_size]` containing
    the string representations of the circuits to be executed.
*   `symbol_names`: `tf.Tensor` of strings with shape `[n_params]`, which is
    used to specify the order in which the values in `symbol_values` should
    be placed inside of the circuits in `programs`.
*   `symbol_values`: `tf.Tensor` of real numbers with shape
    `[batch_size, n_params]` specifying parameter values to resolve into the
    circuits specified by `programs`, following the ordering dictated by
    `symbol_names`.
*   `pauli_sums`: `tf.Tensor` of strings with shape `[batch_size, n_ops]`
    containing the string representation of the operators that will be used
    on all of the circuits in the expectation calculations.
*   `num_samples`: `tf.Tensor` of positive integers representing the number
    of samples to draw per term of `pauli_sums` during the forward pass.
*   `forward_pass_vals`: `tf.Tensor` of real numbers with shape
    `[batch_size, n_ops]` containing the output of the forward pass through
    the op you are differentiating.
*   `grad`: `tf.Tensor` of real numbers with shape `[batch_size, n_ops]`
    representing the gradient backpropagated to the output of the op you
    are differentiating through.

Returns:

A `tf.Tensor` with the same shape as `symbol_values` representing the
gradient backpropagated to the `symbol_values` input of the op you are
differentiating through.
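The shape bookkeeping in this chain rule can be sketched in NumPy (illustrative only; a real differentiator assembles the Jacobian from its gradient circuits rather than taking it as given): the upstream `grad` of shape `[batch_size, n_ops]` is contracted over `n_ops` against a per-circuit Jacobian to produce a result matching `symbol_values`.

```python
import numpy as np

batch_size, n_ops, n_params = 2, 3, 4
rng = np.random.default_rng(0)

# Hypothetical Jacobian d<op>/d(param) for each circuit in the batch, as a
# differentiator would assemble it from its gradient circuits.
jacobian = rng.standard_normal((batch_size, n_ops, n_params))

# Upstream gradient reaching the op's output, shape [batch_size, n_ops].
grad = rng.standard_normal((batch_size, n_ops))

# Chain rule: contract over n_ops; the result has symbol_values' shape.
downstream = np.einsum('bo,bop->bp', grad, jacobian)
assert downstream.shape == (batch_size, n_params)
```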
### `differentiate_analytic`

    @tf.function
    differentiate_analytic(
        programs, symbol_names, symbol_values, pauli_sums, forward_pass_vals, grad
    )

### `generate_differentiable_op`

    generate_differentiable_op(
        *, sampled_op=None, analytic_op=None
    )

Generate a differentiable op by attaching self to an op.

See `tfq.differentiators.Differentiator`. This has been partially
re-implemented by the Adjoint differentiator to disallow the `sampled_op`
input.

Args:

*   `sampled_op`: A `callable` op that you want to make differentiable using
    this differentiator's `differentiate_sampled` method.
*   `analytic_op`: A `callable` op that you want to make differentiable using
    this differentiator's `differentiate_analytic` method.

Returns:

A `callable` op whose gradients are now registered to be a call to this
differentiator's `differentiate_*` function.

### `get_gradient_circuits`

    @tf.function
    get_gradient_circuits(
        programs, symbol_names, symbol_values
    )

See base class description.

### `refresh`

    refresh()

Refresh this differentiator in order to use it with other ops.

Last updated 2024-05-17 UTC.