tf.keras.losses.sparse_categorical_crossentropy
Computes the sparse categorical crossentropy loss.
tf.keras.losses.sparse_categorical_crossentropy(
    y_true, y_pred, from_logits=False, axis=-1
)
Standalone usage:
import tensorflow as tf

y_true = [1, 2]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
assert loss.shape == (2,)
print(loss.numpy())
# array([0.0513, 2.303], dtype=float32)
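Each returned element is the negative log of the probability the prediction assigns to the true class: for the first sample the label is 1 and y_pred[0][1] = 0.95, so the loss is -ln(0.95) ≈ 0.0513; for the second the label is 2 and y_pred[1][2] = 0.1, so the loss is -ln(0.1) ≈ 2.303. A minimal check, using only the standard library, that reproduces the two values above by hand:

import math

# Per-sample loss is the negative log-probability of the true class.
print(-math.log(0.95))  # ≈ 0.0513 (label 1, y_pred[0][1] = 0.95)
print(-math.log(0.1))   # ≈ 2.303  (label 2, y_pred[1][2] = 0.1)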
Args

y_true: Ground truth values (integer class indices).
y_pred: The predicted values.
from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. See the logits example below.
axis: (Optional) Defaults to -1. The dimension along which the entropy is computed.

Returns

Sparse categorical crossentropy loss value.
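If the model outputs raw, unnormalized scores instead of probabilities, pass from_logits=True and the softmax is applied internally. A minimal sketch, assuming hypothetical logit values chosen only for illustration:

import tensorflow as tf

y_true = [1, 2]
# Raw scores (logits); softmax is applied internally when from_logits=True.
logits = [[1.0, 4.0, 0.5], [0.3, 0.3, 2.5]]
loss = tf.keras.losses.sparse_categorical_crossentropy(
    y_true, logits, from_logits=True
)
print(loss.numpy())  # one loss value per sample, shape (2,)

The function can also be passed directly as the loss when compiling a Keras model, e.g. model.compile(optimizer='adam', loss=tf.keras.losses.sparse_categorical_crossentropy).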