Public Constructors

SoftmaxCrossEntropyWithLogits()

Public Methods

static <T extends TNumber, U extends TNumber> Operand<T> softmaxCrossEntropyWithLogits(Scope scope, Operand<U> labels, Operand<T> logits, int axis)
Computes softmax cross entropy between logits and labels.
Inherited Methods
Public Constructors
public SoftmaxCrossEntropyWithLogits ()
Public Methods
public static Operand<T> softmaxCrossEntropyWithLogits (Scope scope, Operand<U> labels, Operand<T> logits, int axis)

Computes softmax cross entropy between logits and labels.
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
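For reference, the per-example quantity is the standard softmax cross entropy (a textbook definition, included here for clarity):

\[
\mathrm{loss}_i = -\sum_{c} \mathrm{labels}_{i,c}\,\log\!\left(\frac{e^{\mathrm{logits}_{i,c}}}{\sum_{c'} e^{\mathrm{logits}_{i,c'}}}\right)
\]

where c ranges over the class dimension (the axis parameter).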
NOTE: While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of labels is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive labels (wherein one and only one class is true at a time), see sparseSoftmaxCrossEntropyWithLogits (org.tensorflow.op.NnOps#sparseSoftmaxCrossEntropyWithLogits).
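For exclusive labels, a minimal sketch of that sparse variant might look as follows, reusing the `tf` handle from the usage example below; the exact overload and label type shown here are assumptions, so check the NnOps javadoc for your version:

```
// Sparse labels: one integer class index per example instead of a full distribution.
Operand<TInt32> sparseLabels = tf.constant(new int[] {0, 1});      // shape [2]
Operand<TFloat32> logits = tf.constant(new float[][] {
    {4.0F, 2.0F, 1.0F}, {0.0F, 5.0F, 1.0F}});                      // shape [2, 3]
// Assumed overload: (labels, logits); no axis parameter, classes on the last dimension.
Operand<TFloat32> loss =
    tf.nn.sparseSoftmaxCrossEntropyWithLogits(sparseLabels, logits);
```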
Usage:

```
Operand<TFloat32> logits =
    tf.constant(new float[][] { {4.0F, 2.0F, 1.0F}, {0.0F, 5.0F, 1.0F} });
Operand<TFloat32> labels =
    tf.constant(new float[][] { {1.0F, 0.0F, 0.0F}, {0.0F, 0.8F, 0.2F} });
Operand<TFloat32> output =
    tf.nn.softmaxCrossEntropyWithLogits(labels, logits, -1);
// output Shape = [2]
// dataType = FLOAT (1)
// values { 0.169846, 0.824745 }
```
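The snippet above assumes an existing `tf` (Ops) handle. A self-contained sketch, assuming the standard TF Java eager entry points (EagerSession, Ops.create); the class name and tensor-reading calls are illustrative and may differ across versions:

```
import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TFloat32;

public final class SoftmaxXentExample {  // hypothetical example class name
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      Operand<TFloat32> logits = tf.constant(new float[][] {
          {4.0F, 2.0F, 1.0F}, {0.0F, 5.0F, 1.0F}});
      Operand<TFloat32> labels = tf.constant(new float[][] {
          {1.0F, 0.0F, 0.0F}, {0.0F, 0.8F, 0.2F}});
      Operand<TFloat32> output =
          tf.nn.softmaxCrossEntropyWithLogits(labels, logits, -1);
      TFloat32 result = output.asTensor();
      // Expected per-example losses: 0.169846 and 0.824745
      System.out.println(result.getFloat(0) + ", " + result.getFloat(1));
    }
  }
}
```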
Backpropagation will happen into both logits and labels. To disallow backpropagation into labels, pass label tensors through tf.stopGradient before feeding them to this function.
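A minimal sketch of that pattern, assuming the standard tf.stopGradient op on the same Ops handle and the logits/labels from the usage example:

```
// Wrap labels in stopGradient so no gradient flows into them.
Operand<TFloat32> frozenLabels = tf.stopGradient(labels);
Operand<TFloat32> loss =
    tf.nn.softmaxCrossEntropyWithLogits(frozenLabels, logits, -1);
```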
Parameters
scope | current scope |
---|---|
labels | Each vector along the class dimension should hold a valid probability
distribution e.g. for the case in which labels are of shape [batch_size, num_classes]
, each row of labels[i] must be a valid probability distribution. |
logits | Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities. |
axis | The class dimension. -1 is the last dimension. |
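To illustrate the axis parameter, a sketch with the class dimension first (the usage example's data transposed; the expected values assume the op relocates the class dimension as described, which is worth verifying on your version):

```
// Shapes are [num_classes, batch_size] = [3, 2]; classes live on axis 0.
Operand<TFloat32> logitsT = tf.constant(new float[][] {
    {4.0F, 0.0F}, {2.0F, 5.0F}, {1.0F, 1.0F}});
Operand<TFloat32> labelsT = tf.constant(new float[][] {
    {1.0F, 0.0F}, {0.0F, 0.8F}, {0.0F, 0.2F}});
// axis = 0 marks the first dimension as the class dimension.
Operand<TFloat32> out = tf.nn.softmaxCrossEntropyWithLogits(labelsT, logitsT, 0);
// Expected values match the axis = -1 example: { 0.169846, 0.824745 }
```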
Returns

- the softmax cross entropy loss. Its type is the same as logits and its shape is the same as labels except that it does not have the last dimension of labels.