tensorflow::ops::TensorArrayGrad Class Reference

#include <data_flow_ops.h>

Creates a TensorArray for storing the gradients of values in the given handle.
Summary
If the given TensorArray gradient already exists, returns a reference to it.

Locks the size of the original TensorArray by disabling its dynamic size flag.
A note about the input flow_in:
The handle flow_in forces the execution of the gradient lookup to occur only after certain other operations have occurred. For example, when the forward TensorArray is dynamically sized, writes to this TensorArray may resize the object. The gradient TensorArray is statically sized based on the size of the forward TensorArray when this operation executes. Furthermore, the size of the forward TensorArray is frozen by this call. As a result, the flow is used to ensure that the call to generate the gradient TensorArray only happens after all writes are executed.

In the case of dynamically sized TensorArrays, gradient computation should only be performed on read operations that have themselves been chained via flow to occur only after all writes have executed. That way the final size of the forward TensorArray is known when this operation is called.
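A minimal sketch of this chaining pattern with the r2.x C++ client API is shown below. The sizes, values, and the companion TensorArray, TensorArrayWrite, and Const wrappers are illustrative assumptions and not part of this page; only the TensorArrayGrad call matches the signature documented here.

```c++
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"

using namespace tensorflow;
using namespace tensorflow::ops;

int main() {
  Scope root = Scope::NewRootScope();

  // Dynamically sized forward TensorArray (grows as elements are written).
  TensorArray ta(root, /*size=*/0, DT_FLOAT, TensorArray::DynamicSize(true));

  // Two writes, chained through the flow value.
  auto v0 = Const(root, {1.0f, 2.0f});
  auto v1 = Const(root, {3.0f, 4.0f});
  auto w0 = TensorArrayWrite(root, ta.handle, 0, v0, ta.flow);
  auto w1 = TensorArrayWrite(root, ta.handle, 1, v1, w0.flow_out);

  // Passing the last write's flow_out as flow_in ensures the gradient
  // TensorArray is created, and the forward size frozen, only after both
  // writes have executed.
  TensorArrayGrad grad(root, ta.handle, w1.flow_out, /*source=*/"gradients");

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({grad.flow_out}, &outputs));
  return 0;
}
```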
A note about the source attribute:
TensorArray gradient calls use an accumulator TensorArray object. If multiple gradients are calculated and run in the same session, the multiple gradient nodes may accidentally flow through the same accumulator TensorArray. This double counts and generally breaks the TensorArray gradient flow.
The solution is to identify which gradient call this particular TensorArray gradient is being called in. This is performed by identifying a unique string (e.g. "gradients", "gradients_1", ...) from the input gradient Tensor's name. This string is used as a suffix when creating the TensorArray gradient object here (the attribute source).

The attribute source is added as a suffix to the forward TensorArray's name when performing the creation / lookup, so that each separate gradient calculation gets its own TensorArray accumulator.
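The sketch below illustrates the disambiguation by source string. The helper function and its parameters are hypothetical; the source values follow the "gradients", "gradients_1" convention described above.

```c++
#include "tensorflow/cc/ops/standard_ops.h"

using tensorflow::Input;
using tensorflow::Scope;
using tensorflow::ops::TensorArrayGrad;

// Hypothetical helper: each gradient computation passes its own source string,
// so each call creates or looks up a separate accumulator TensorArray whose
// name is the forward TensorArray's name suffixed with that source.
void AddGradientAccumulators(const Scope& scope, Input forward_handle,
                             Input flow_in) {
  TensorArrayGrad grad_0(scope, forward_handle, flow_in, "gradients");
  TensorArrayGrad grad_1(scope, forward_handle, flow_in, "gradients_1");
  // grad_0.grad_handle and grad_1.grad_handle refer to distinct accumulators,
  // so the two gradient flows cannot double count each other.
}
```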
Args:
- scope: A Scope object.
- handle: The handle to the forward TensorArray.
- flow_in: A float scalar that enforces proper chaining of operations.
- source: The gradient source string, used to decide which gradient TensorArray to return.
Returns:
- Output grad_handle
- Output flow_out

Constructors and Destructors
- TensorArrayGrad(const ::tensorflow::Scope & scope, ::tensorflow::Input handle, ::tensorflow::Input flow_in, StringPiece source)

Public attributes
- flow_out: ::tensorflow::Output
- grad_handle: ::tensorflow::Output
- operation: Operation

Public functions
- TensorArrayGrad(const ::tensorflow::Scope & scope, ::tensorflow::Input handle, ::tensorflow::Input flow_in, StringPiece source)
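For reference, a minimal construction sketch using the signature above; the wrapper function, its name, and the assumption that forward_handle and flow_in come from an existing forward TensorArray are illustrative, not part of this page.

```c++
#include "tensorflow/cc/ops/standard_ops.h"

using namespace tensorflow;

// Hypothetical wrapper: builds the gradient lookup node and returns its handle.
Output MakeGradHandle(const Scope& scope, Input forward_handle, Input flow_in) {
  ops::TensorArrayGrad grad(scope, forward_handle, flow_in,
                            /*source=*/"gradients");
  // Public attributes documented above:
  Output grad_handle = grad.grad_handle;  // handle to the gradient TensorArray
  Output flow_out = grad.flow_out;        // float scalar for further chaining
  Operation op = grad.operation;          // the underlying graph Operation
  (void)flow_out;
  (void)op;
  return grad_handle;
}
```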