Module: tf_agents.utils.eager_utils
Common utilities for TF-Agents.
Example of usage:
import tensorflow as tf
from tf_agents.utils import eager_utils

# Assumes `inputs`, `labels`, `optimizer`, and `num_train_steps` are defined,
# and that `tfe` refers to TensorFlow's eager-mode helpers (tf.contrib.eager in TF1).

@eager_utils.run_in_graph_and_eager_modes
def loss_fn(x, y):
  v = tf.compat.v1.get_variable('v', initializer=tf.ones_initializer(), shape=())
  return v + x - y

with tfe.graph_mode():
  # loss and train_step are Tensors/Ops in the graph.
  loss_op = loss_fn(inputs, labels)
  train_step_op = eager_utils.create_train_step(loss_op, optimizer)
  # Compute the loss and apply gradients to the variables using the optimizer.
  with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(num_train_steps):
      loss_value = sess.run(train_step_op)

with tfe.eager_mode():
  # loss and train_step are lambda functions that can be called.
  loss = loss_fn(inputs, labels)
  train_step = eager_utils.create_train_step(loss, optimizer)
  # Compute the loss and apply gradients to the variables using the optimizer.
  for _ in range(num_train_steps):
    loss_value = train_step()
Classes
class Future: Converts a function or class method call into a future callable (a usage sketch follows below).
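A minimal sketch of Future, assuming the documented Future(func_or_method, *args, **kwargs) convention in which the captured call runs when the future object itself is called:

from tf_agents.utils import eager_utils

def add(a, b):
  return a + b

# Capture the call add(1, 2) without executing it yet.
future = eager_utils.Future(add, 1, 2)
result = future()  # The deferred call runs here, returning 3.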
Functions
add_gradients_summaries(...): Add summaries to gradients.
add_variables_summaries(...): Add summaries for variables.
clip_gradient_norms(...): Clips the gradients by the given value.
clip_gradient_norms_fn(...): Returns a transform_grads_fn function for gradient clipping (see the first sketch after this list).
create_train_op(...): Creates an Operation that evaluates the gradients and returns the loss.
create_train_step(...): Creates a train_step that evaluates the gradients and returns the loss.
dataset_iterator(...): Constructs a Dataset iterator (see the second sketch after this list).
future_in_eager_mode(...): Decorator that allows a function/method to run in graph and in eager modes.
get_next(...): Returns the next element in a Dataset iterator.
has_self_cls_arg(...): Checks whether a method takes self/cls as its first argument.
is_unbound(...): Checks whether a method is unbound.
np_function(...): Decorator that allows a NumPy function to be used in Eager and Graph modes (see the third sketch after this list).
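As an illustration of clip_gradient_norms_fn, the returned function can be handed to create_train_step. A minimal eager-mode sketch, assuming create_train_step forwards a transform_grads_fn keyword the same way create_train_op does:

import tensorflow as tf
from tf_agents.utils import eager_utils

v = tf.Variable(1.0)

def loss_fn():
  return tf.square(v)

optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)

# Clip each gradient to a norm of at most 1.0 before it is applied.
clip_fn = eager_utils.clip_gradient_norms_fn(1.0)
train_step = eager_utils.create_train_step(
    loss_fn, optimizer, transform_grads_fn=clip_fn)
loss_value = train_step()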
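dataset_iterator and get_next together hide the difference between graph-mode and eager-mode iteration. A minimal sketch of the intended pairing:

import tensorflow as tf
from tf_agents.utils import eager_utils

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
# In graph mode this builds an iterator op; in eager mode it wraps a Python iterator.
iterator = eager_utils.dataset_iterator(dataset)
first_element = eager_utils.get_next(iterator)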
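np_function lets a NumPy-only function accept and return Tensors in both modes. A minimal sketch, assuming the default behavior where output dtypes are inferred from the input dtypes:

import numpy as np
import tensorflow as tf
from tf_agents.utils import eager_utils

@eager_utils.np_function
def np_sqrt(x):
  # Runs as ordinary NumPy code even when called with Tensors.
  return np.sqrt(x)

value = np_sqrt(tf.constant(4.0))  # A float32 Tensor holding 2.0.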
Other Members
absolute_import: Instance of __future__._Feature
division: Instance of __future__._Feature
print_function: Instance of __future__._Feature