tf.compat.v1.losses.softmax_cross_entropy
Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.
```python
tf.compat.v1.losses.softmax_cross_entropy(
    onehot_labels, logits, weights=1.0, label_smoothing=0, scope=None,
    loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)
```
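For orientation, a minimal usage sketch (the tensors are illustrative; assumes TF 2.x, where this `tf.compat.v1` endpoint runs eagerly):

```python
import tensorflow as tf

# Illustrative batch: 3 samples, 4 classes.
onehot_labels = tf.constant([[1., 0., 0., 0.],
                             [0., 1., 0., 0.],
                             [0., 0., 0., 1.]])
logits = tf.constant([[2.0, 0.5, 0.1, 0.1],
                      [0.2, 1.5, 0.3, 0.0],
                      [0.3, 0.2, 0.1, 2.2]])

# Under the default reduction, the result is a scalar Tensor.
loss = tf.compat.v1.losses.softmax_cross_entropy(onehot_labels, logits)
print(loss)  # tf.Tensor(..., shape=(), dtype=float32)
```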
`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
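A sketch of both weighting modes (the tensors are illustrative):

```python
import tensorflow as tf

onehot_labels = tf.one_hot([0, 1, 3], depth=4)  # shape [3, 4]
logits = tf.random.normal([3, 4])

# Scalar weight: the reduced loss is simply scaled by 0.5.
scaled = tf.compat.v1.losses.softmax_cross_entropy(
    onehot_labels, logits, weights=0.5)

# Per-sample weights of shape [batch_size]: here the third sample is
# ignored. Under the default SUM_BY_NONZERO_WEIGHTS reduction,
# zero-weight samples also drop out of the averaging denominator.
per_sample = tf.compat.v1.losses.softmax_cross_entropy(
    onehot_labels, logits, weights=tf.constant([1.0, 1.0, 0.0]))
```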
If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`:

```python
new_onehot_labels = (onehot_labels * (1 - label_smoothing)
                     + label_smoothing / num_classes)
```
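As a quick numeric check of that formula (a standalone sketch, not part of the API):

```python
import tensorflow as tf

num_classes = 4
label_smoothing = 0.1
onehot_labels = tf.constant([[0., 1., 0., 0.]])

# Each zero entry moves up to 0.1/4 = 0.025; the one entry moves
# down to 1 * 0.9 + 0.025 = 0.925.
new_onehot_labels = (onehot_labels * (1 - label_smoothing)
                     + label_smoothing / num_classes)
print(new_onehot_labels.numpy())  # [[0.025 0.925 0.025 0.025]]
```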
Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
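To make the shape behavior concrete (a sketch assuming TF 2.x eager execution):

```python
import tensorflow as tf

onehot_labels = tf.one_hot([0, 1, 3], depth=4)  # [batch_size=3, num_classes=4]
logits = tf.random.normal([3, 4])

# reduction=NONE keeps one loss value per sample: shape [batch_size].
per_sample = tf.compat.v1.losses.softmax_cross_entropy(
    onehot_labels, logits,
    reduction=tf.compat.v1.losses.Reduction.NONE)
print(per_sample.shape)  # (3,)

# The default SUM_BY_NONZERO_WEIGHTS reduces to a scalar.
reduced = tf.compat.v1.losses.softmax_cross_entropy(onehot_labels, logits)
print(reduced.shape)  # ()
```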
| Args | |
|---|---|
| `onehot_labels` | One-hot-encoded labels. |
| `logits` | Logits outputs of the network. |
| `weights` | Optional `Tensor` that is broadcastable to the loss. |
| `label_smoothing` | If greater than 0, smooth the labels. |
| `scope` | The scope for the operations performed in computing the loss. |
| `loss_collection` | Collection to which the loss will be added. |
| `reduction` | Type of reduction to apply to the loss. |
| Returns |
|---|
| Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar. |
| Raises | |
|---|---|
| `ValueError` | If the shape of `logits` doesn't match that of `onehot_labels`, if the shape of `weights` is invalid, or if any of `weights`, `onehot_labels`, or `logits` is `None`. |
Eager Compatibility

The `loss_collection` argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a `tf.keras.Model`.
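A minimal sketch of the eager pattern (the tensors are illustrative):

```python
import tensorflow as tf

onehot_labels = tf.one_hot([0, 2], depth=3)
logits = tf.constant([[1.5, 0.2, 0.1],
                      [0.1, 0.3, 2.0]])

# Executing eagerly, loss_collection is ignored; just keep the returned
# Tensor and use it directly (e.g. inside a training step).
loss = tf.compat.v1.losses.softmax_cross_entropy(onehot_labels, logits)

# Alternatively, within a tf.keras.Model you could register the value
# via model.add_loss(...) so Keras tracks it for you.
```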