Computes Kullback-Leibler divergence loss between labels and predictions.
```
loss = labels * log(labels / predictions)
```
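The example values below are consistent with the labels and predictions being clipped to a small epsilon before the per-sample sum over the last axis and the batch mean are taken. The following plain-Java sketch is a hand computation under that assumption (the epsilon value 1e-7 and the clipping itself are assumptions, not part of this API's documented contract); it does not call the library, it only reproduces the numbers shown in the usage examples.

```java
// Hand computation of the example values, assuming epsilon clipping (1e-7),
// a per-sample sum over the last axis, and a mean over the batch.
public class KlByHand {
  static double klRow(double[] labels, double[] preds) {
    double eps = 1e-7; // assumed clipping value
    double sum = 0.0;
    for (int i = 0; i < labels.length; i++) {
      double y = Math.min(Math.max(labels[i], eps), 1.0);
      double p = Math.min(Math.max(preds[i], eps), 1.0);
      sum += y * Math.log(y / p);
    }
    return sum;
  }

  public static void main(String[] args) {
    double[][] labels = { {0, 1}, {0, 0} };
    double[][] preds = { {0.6, 0.4}, {0.4, 0.6} };
    double row0 = klRow(labels[0], preds[0]);      // ~0.916
    double row1 = klRow(labels[1], preds[1]);      // ~-3.08e-06
    System.out.println((row0 + row1) / 2);         // ~0.458 (default reduction)
    System.out.println(row0 + row1);               // ~0.916 (SUM reduction)
    System.out.println((row0 * 0.8 + row1 * 0.2) / 2); // ~0.366 (sample weights 0.8, 0.2)
  }
}
```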
Standalone usage:

```java
Operand<TFloat32> labels =
    tf.constant(new float[][] { {0.f, 1.f}, {0.f, 0.f} });
Operand<TFloat32> predictions =
    tf.constant(new float[][] { {0.6f, 0.4f}, {0.4f, 0.6f} });
KLDivergence kld = new KLDivergence(tf);
Operand<TFloat32> result = kld.call(labels, predictions);
// produces 0.458
```

Calling with sample weight:

```java
Operand<TFloat32> sampleWeight = tf.constant(new float[] {0.8f, 0.2f});
Operand<TFloat32> result = kld.call(labels, predictions, sampleWeight);
// produces 0.366f
```

Using SUM reduction type:

```java
KLDivergence kld = new KLDivergence(tf, Reduction.SUM);
Operand<TFloat32> result = kld.call(labels, predictions);
// produces 0.916f
```

Using NONE reduction type:

```java
KLDivergence kld = new KLDivergence(tf, Reduction.NONE);
Operand<TFloat32> result = kld.call(labels, predictions);
// produces [0.916f, -3.08e-06f]
```
Public Constructors
public KLDivergence (Ops tf)
Creates a Kullback-Leibler Divergence Loss using getSimpleName() as the loss name and a Reduction of REDUCTION_DEFAULT.
Parameters
| tf | the TensorFlow Ops |
|---|---|
public KLDivergence (Ops tf, Reduction reduction)
Creates a Kullback-Leibler Divergence Loss using getSimpleName() as the loss name.
Parameters
| tf | the TensorFlow Ops |
|---|---|
| reduction | Type of Reduction to apply to the loss. |
public KLDivergence (Ops tf, String name, Reduction reduction)
Creates a Kullback-Leibler Divergence Loss.
Parameters
| tf | the TensorFlow Ops |
|---|---|
| name | the name of the loss |
| reduction | Type of Reduction to apply to the loss. |
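As a quick illustration of this constructor, the sketch below builds a loss with a custom name; the name string "kl_div_loss" is only an example, and labels and predictions are the operands from the standalone usage above.

```java
// Hypothetical custom name; Reduction.SUM matches the SUM example earlier on this page.
KLDivergence namedKld = new KLDivergence(tf, "kl_div_loss", Reduction.SUM);
Operand<TFloat32> sumLoss = namedKld.call(labels, predictions);
// produces 0.916f, as in the SUM reduction example above
```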
Public Methods
public <T extends TNumber> Operand<T> call (Operand<? extends TNumber> labels, Operand<T> predictions, Operand<T> sampleWeights)
Generates an Operand that calculates the loss.
Parameters
| labels | the truth values or labels |
|---|---|
| predictions | the predictions |
| sampleWeights | Optional sampleWeights acts as a coefficient for the loss. If a scalar is provided, the loss is simply scaled by that value. If sampleWeights is a tensor of shape [batch_size], the total loss for each sample of the batch is rescaled by the corresponding element of sampleWeights. If the shape of sampleWeights is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), each loss element of predictions is scaled by the corresponding value of sampleWeights. (Note on dN-1: all loss functions reduce by one dimension, usually axis = -1.) A short scalar-weight sketch follows the Returns entry below. |
Returns
- the loss
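For the scalar case described above, the loss is simply scaled by the given value. A minimal sketch under that reading, reusing the labels and predictions defined in the standalone usage (the 0.5f weight and the expected value are illustrative):

```java
KLDivergence kld = new KLDivergence(tf);              // default reduction
Operand<TFloat32> scalarWeight = tf.constant(0.5f);   // scalar coefficient for the loss
Operand<TFloat32> scaled = kld.call(labels, predictions, scalarWeight);
// the whole loss is scaled by 0.5f, so this should be about 0.5f * 0.458f ≈ 0.229f
```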