Computes the Sigmoid cross-entropy loss between y_true and y_pred.
tfr.keras.losses.SigmoidCrossEntropyLoss(
    reduction: tf.losses.Reduction = tf.losses.Reduction.AUTO,
    name: Optional[str] = None,
    ragged: bool = False
)
loss = -(y_true log(sigmoid(y_pred)) + (1 - y_true) log(1 - sigmoid(y_pred)))
Standalone usage:
y_true = [[1., 0.]]
y_pred = [[0.6, 0.8]]
loss = tfr.keras.losses.SigmoidCrossEntropyLoss()
loss(y_true, y_pred).numpy()
0.8042943
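The reduced value above can be reproduced by hand. The following is a minimal pure-Python sketch (not part of the library) that applies the per-entry formula and then averages over all entries, assuming the default SUM_OVER_BATCH_SIZE reduction; the helper names are illustrative only:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_cross_entropy(y_true, y_pred):
    # Per-entry loss: -[y*log(sigmoid(s)) + (1 - y)*log(1 - sigmoid(s))],
    # then averaged over all entries (SUM_OVER_BATCH_SIZE behavior).
    losses = [
        -(y * math.log(sigmoid(s)) + (1 - y) * math.log(1 - sigmoid(s)))
        for y, s in zip(y_true, y_pred)
    ]
    return sum(losses) / len(losses)

print(sigmoid_cross_entropy([1., 0.], [0.6, 0.8]))  # ≈ 0.8042943
```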
# Using ragged tensors
y_true = tf.ragged.constant([[1., 0.], [0., 1., 0.]])
y_pred = tf.ragged.constant([[0.6, 0.8], [0.5, 0.8, 0.4]])
loss = tfr.keras.losses.SigmoidCrossEntropyLoss(ragged=True)
loss(y_true, y_pred).numpy()
0.64446354
Usage with the compile() API:
model.compile(optimizer='sgd',
loss=tfr.keras.losses.SigmoidCrossEntropyLoss())
Definition:
\[ \mathcal{L}(\{y\}, \{s\}) = - \sum_{i} \left[ y_i \log(\text{sigmoid}(s_i)) + (1 - y_i) \log(1 - \text{sigmoid}(s_i)) \right] \]
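For reference, TensorFlow's underlying tf.nn.sigmoid_cross_entropy_with_logits evaluates each per-element term in the numerically stable form max(s, 0) - s*y + log(1 + exp(-|s|)), which avoids overflow for large-magnitude logits. A small sketch checking that this rewrite agrees with the direct form above (the helper names are illustrative, not library API):

```python
import math

def naive_term(y, s):
    # Direct per-element form from the definition above.
    sig = 1.0 / (1.0 + math.exp(-s))
    return -(y * math.log(sig) + (1 - y) * math.log(1 - sig))

def stable_term(y, s):
    # Equivalent rewrite: max(s, 0) - s*y + log(1 + exp(-|s|)).
    # Stays finite even when exp(s) or exp(-s) would overflow.
    return max(s, 0.0) - s * y + math.log1p(math.exp(-abs(s)))

# Both forms agree on moderate logits, positive or negative.
for y, s in [(1.0, 0.6), (0.0, 0.8), (1.0, -5.0), (0.0, 12.0)]:
    assert abs(naive_term(y, s) - stable_term(y, s)) < 1e-9
```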
| Args | |
|---|---|
| reduction | Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used under a tf.distribute.Strategy, except via Model.compile() and Model.fit(), using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see the custom training tutorial for more details. |
| name | Optional name for the instance. |
| ragged | If True, ragged tensors are accepted as y_true and y_pred. |
Methods

from_config

@classmethod
from_config(
    config
)

Instantiates a Loss from its config (output of get_config()).
| Args | |
|---|---|
| config | Output of get_config(). |

Returns: A Loss instance.
get_config

get_config() -> Dict[str, Any]

Returns the config dictionary for a Loss instance.
__call__

__call__(
    y_true: tfr.keras.model.TensorLike,
    y_pred: tfr.keras.model.TensorLike,
    sample_weight: Optional[utils.TensorLike] = None
) -> tf.Tensor

See tf.keras.losses.Loss.