Module: tf.losses
Loss operations for use in neural networks.
Note: All the losses are added to the GraphKeys.LOSSES collection by default.
Classes
class Reduction: Types of loss reduction.
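The Reduction class enumerates how per-element losses are aggregated (for example NONE, SUM, MEAN, SUM_OVER_BATCH_SIZE). A minimal sketch of passing a reduction to a loss function, assuming TF 1.x-style graph execution via tf.compat.v1; the tensor values here are made up for illustration and are not part of the API documentation:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

labels = tf.constant([[0.0, 1.0], [1.0, 0.0]])
predictions = tf.constant([[0.2, 0.8], [0.6, 0.4]])

# Reduction.NONE keeps the per-element losses instead of collapsing to a scalar.
per_element = tf.losses.mean_squared_error(
    labels, predictions, reduction=tf.losses.Reduction.NONE)

# The default reduction (SUM_BY_NONZERO_WEIGHTS) returns a scalar loss.
scalar = tf.losses.mean_squared_error(labels, predictions)

with tf.Session() as sess:
    print(sess.run(per_element))  # same shape as labels, here (2, 2)
    print(sess.run(scalar))       # a single scalar value
```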
Functions
absolute_difference(...): Adds an Absolute Difference loss to the training procedure.
add_loss(...): Adds an externally defined loss to the collection of losses.
compute_weighted_loss(...): Computes the weighted loss.
cosine_distance(...): Adds a cosine-distance loss to the training procedure. (deprecated arguments)
get_losses(...): Gets the list of losses from the loss_collection.
get_regularization_loss(...): Gets the total regularization loss.
get_regularization_losses(...): Gets the list of regularization losses.
get_total_loss(...): Returns a tensor whose value represents the total loss.
hinge_loss(...): Adds a hinge loss to the training procedure.
huber_loss(...): Adds a Huber Loss term to the training procedure.
log_loss(...): Adds a Log Loss term to the training procedure.
mean_pairwise_squared_error(...): Adds a pairwise-errors-squared loss to the training procedure.
mean_squared_error(...): Adds a Sum-of-Squares loss to the training procedure.
sigmoid_cross_entropy(...): Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.
softmax_cross_entropy(...): Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2.
sparse_softmax_cross_entropy(...): Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits.
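A minimal end-to-end sketch of how these functions interact with the GraphKeys.LOSSES collection, assuming TF 1.x graph mode (tf.compat.v1 in TF 2.x); the labels, logits, and the extra penalty term are illustrative assumptions, not values taken from the documentation:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

onehot_labels = tf.constant([[0.0, 1.0], [1.0, 0.0]])   # one-hot labels (assumed)
logits = tf.constant([[2.0, 1.0], [0.1, 0.2]])          # model outputs (assumed)
predictions = tf.constant([[0.4, 0.6], [0.5, 0.5]])     # regression-style outputs (assumed)

# Each call returns a loss tensor and, by default, adds it to GraphKeys.LOSSES.
ce = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels, logits=logits)
mse = tf.losses.mean_squared_error(labels=onehot_labels, predictions=predictions)

# An externally computed term can be placed in the same collection.
tf.losses.add_loss(tf.constant(0.01))

# Total loss = everything in GraphKeys.LOSSES, plus regularization losses if requested.
total = tf.losses.get_total_loss(add_regularization_losses=True)

with tf.Session() as sess:
    print(sess.run([ce, mse, total]))
```

In a typical training setup the tensor returned by get_total_loss is the one passed to an optimizer's minimize call, so that every loss registered in the collection contributes to the gradients.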