Module: tf.compat.v1.losses
Public API for the tf._api.v2.losses namespace.
Classes
class Reduction: Types of loss reduction.
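The Reduction constants are string keys that the loss functions below accept through their reduction argument. A minimal sketch, assuming TensorFlow 2.x with the tf.compat.v1 API available; the tensors and values are illustrative and not taken from this page:

```python
import tensorflow as tf

Reduction = tf.compat.v1.losses.Reduction

# The reduction keys are plain string constants; all() lists them and
# validate() raises ValueError for an unknown key.
print(Reduction.all())
Reduction.validate(Reduction.SUM_OVER_BATCH_SIZE)

labels = tf.constant([[0.0, 1.0], [1.0, 0.0]])
predictions = tf.constant([[0.1, 0.8], [0.6, 0.4]])

# Reduction.NONE keeps the unreduced per-element losses; the default
# (SUM_BY_NONZERO_WEIGHTS) collapses them to a scalar.
per_element = tf.compat.v1.losses.mean_squared_error(
    labels, predictions, reduction=Reduction.NONE)
scalar = tf.compat.v1.losses.mean_squared_error(labels, predictions)
```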
Functions
absolute_difference(...): Adds an Absolute Difference loss to the training procedure.
add_loss(...): Adds an externally defined loss to the collection of losses.
compute_weighted_loss(...): Computes the weighted loss.
cosine_distance(...): Adds a cosine-distance loss to the training procedure. (deprecated arguments)
get_losses(...): Gets the list of losses from the loss_collection.
get_regularization_loss(...): Gets the total regularization loss.
get_regularization_losses(...): Gets the list of regularization losses.
get_total_loss(...): Returns a tensor whose value represents the total loss.
hinge_loss(...): Adds a hinge loss to the training procedure.
huber_loss(...): Adds a Huber Loss term to the training procedure.
log_loss(...): Adds a Log Loss term to the training procedure.
mean_pairwise_squared_error(...): Adds a pairwise-errors-squared loss to the training procedure.
mean_squared_error(...): Adds a Sum-of-Squares loss to the training procedure.
sigmoid_cross_entropy(...): Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.
softmax_cross_entropy(...): Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2.
sparse_softmax_cross_entropy(...): Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits.
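Taken together, these functions follow a collect-then-total pattern: each loss op adds its result to the GraphKeys.LOSSES collection, and get_total_loss sums everything that was collected plus any regularization losses. Because the collection mechanism is graph-based, the sketch below builds an explicit graph and evaluates it with a compat.v1 Session. A minimal sketch, assuming TensorFlow 2.x with the tf.compat.v1 API available; the tensors and constants are illustrative, not from this page:

```python
import tensorflow as tf

v1 = tf.compat.v1

# Loss collections are graph-based, so build the ops inside an explicit graph.
graph = tf.Graph()
with graph.as_default():
    labels = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    predictions = tf.constant([[0.9, 0.2], [0.3, 0.7]])

    # Each call returns a loss tensor and registers it in GraphKeys.LOSSES.
    mse = v1.losses.mean_squared_error(labels, predictions)
    huber = v1.losses.huber_loss(labels, predictions, delta=1.0)

    # A loss defined outside this module can be registered explicitly.
    v1.losses.add_loss(tf.constant(0.05))

    # get_losses() lists the collected terms; get_total_loss() sums them
    # together with any regularization losses.
    collected = v1.losses.get_losses()
    total = v1.losses.get_total_loss(add_regularization_losses=True)

    with v1.Session() as sess:
        per_term, total_value = sess.run([collected, total])
        print(per_term, total_value)
```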