Adds a Sum-of-Squares loss to the training procedure.
```python
tf.compat.v1.losses.mean_squared_error(
    labels,
    predictions,
    weights=1.0,
    scope=None,
    loss_collection=ops.GraphKeys.LOSSES,
    reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)
```
Migrate to TF2
`tf.compat.v1.losses.mean_squared_error` is mostly compatible with eager execution and `tf.function`. However, the `loss_collection` argument is ignored when executing eagerly, and no loss is written to the loss collections. You will need to either hold on to the return value manually or rely on `tf.keras.Model` loss tracking, as sketched below.
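For example, a minimal sketch of holding on to the return value yourself under eager execution (the tensor values here are illustrative):

```python
import tensorflow as tf

labels = tf.constant([1.0, 2.0])
predictions = tf.constant([1.5, 2.5])

# Under eager execution the loss collection is not populated, so keep the
# returned tensor and feed it to your training step directly.
loss = tf.compat.v1.losses.mean_squared_error(labels, predictions)

# `tf.compat.v1.losses.get_losses()` would return [] here in eager mode.
```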
To switch to native TF2 style, instantiate the `tf.keras.losses.MeanSquaredError` class and call the object instead.
Structural Mapping to Native TF2
Before:
```python
loss = tf.compat.v1.losses.mean_squared_error(
    labels=labels,
    predictions=predictions,
    weights=weights,
    reduction=reduction)
```
After:
```python
loss_fn = tf.keras.losses.MeanSquaredError(
    reduction=reduction)
loss = loss_fn(
    y_true=labels,
    y_pred=predictions,
    sample_weight=weights)
```
How to Map Arguments
| TF1 Arg Name | TF2 Arg Name | Note |
| --- | --- | --- |
| `labels` | `y_true` | In `__call__()` method |
| `predictions` | `y_pred` | In `__call__()` method |
| `weights` | `sample_weight` | In `__call__()` method. The shape requirements for `sample_weight` differ from those for `weights`; check the argument definition for details. |
| `scope` | Not supported | - |
| `loss_collection` | Not supported | Losses should be tracked explicitly or with Keras APIs (for example, `add_loss`) instead of via collections. |
| `reduction` | `reduction` | In constructor. The values `tf.compat.v1.losses.Reduction.SUM_OVER_BATCH_SIZE`, `tf.compat.v1.losses.Reduction.SUM`, and `tf.compat.v1.losses.Reduction.NONE` correspond to `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE`, `tf.keras.losses.Reduction.SUM`, and `tf.keras.losses.Reduction.NONE`, respectively. Other values, including the default `tf.compat.v1.losses.Reduction.SUM_BY_NONZERO_WEIGHTS`, have no directly corresponding value; modify the loss implementation manually (see the sketch below). |
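As an illustration, one way to approximate the TF1 default `SUM_BY_NONZERO_WEIGHTS` reduction for per-sample weights in TF2 is to take a weighted sum and divide by the count of nonzero weights. This is a minimal sketch; `mse_sum_by_nonzero_weights` is a hypothetical helper, not a TensorFlow API, and it only covers the per-sample-weight case:

```python
import tensorflow as tf

def mse_sum_by_nonzero_weights(y_true, y_pred, sample_weight):
    """Hypothetical helper: weighted-sum MSE divided by the number of
    nonzero per-sample weights, mimicking SUM_BY_NONZERO_WEIGHTS."""
    mse = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.SUM)
    weighted_sum = mse(y_true, y_pred, sample_weight=sample_weight)
    num_nonzero = tf.math.count_nonzero(
        sample_weight, dtype=weighted_sum.dtype)
    return weighted_sum / num_nonzero

# Matches the TF1 result (1.0) from the "Before" example below.
mse_sum_by_nonzero_weights(
    y_true=[[1.0], [2.0], [3.0]],
    y_pred=[[1.0], [3.0], [5.0]],
    sample_weight=[0.0, 1.0, 0.25]).numpy()  # 1.0
```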
Before & After Usage Example
Before:
```python
y_true = [1, 2, 3]
y_pred = [1, 3, 5]
weights = [0, 1, 0.25]

# Samples with zero weight are excluded from the calculation when the
# `reduction` argument is set to its default value,
# `Reduction.SUM_BY_NONZERO_WEIGHTS`.
tf.compat.v1.losses.mean_squared_error(
    labels=y_true,
    predictions=y_pred,
    weights=weights).numpy()
# 1.0

tf.compat.v1.losses.mean_squared_error(
    labels=y_true,
    predictions=y_pred,
    weights=weights,
    reduction=tf.compat.v1.losses.Reduction.SUM_OVER_BATCH_SIZE).numpy()
# 0.66667
```
After:
```python
y_true = [[1.0], [2.0], [3.0]]
y_pred = [[1.0], [3.0], [5.0]]
weights = [1, 1, 0.25]

mse = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)
mse(y_true=y_true, y_pred=y_pred, sample_weight=weights).numpy()
# 0.66667
```
Description
`weights` acts as a coefficient for the loss. If a scalar is provided, the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`. The sketch below illustrates these three cases.
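A minimal sketch of the three `weights` shapes (the values are arbitrary):

```python
import tensorflow as tf

labels = tf.constant([1.0, 2.0, 3.0])
predictions = tf.constant([1.0, 3.0, 5.0])  # squared errors: [0, 1, 4]

# Scalar: the loss is simply scaled by the given value.
tf.compat.v1.losses.mean_squared_error(
    labels, predictions, weights=2.0).numpy()
# 3.3333335, i.e. (0*2 + 1*2 + 4*2) / 3

# [batch_size]: each sample's loss is rescaled by its weight.
tf.compat.v1.losses.mean_squared_error(
    labels, predictions, weights=[1.0, 1.0, 0.5]).numpy()
# 1.0, i.e. (0*1 + 1*1 + 4*0.5) / 3

# Same shape as `predictions`: each element's loss is scaled individually.
labels_2d = tf.constant([[1.0, 2.0], [3.0, 4.0]])
predictions_2d = tf.constant([[1.0, 3.0], [5.0, 4.0]])  # errors: [[0,1],[4,0]]
tf.compat.v1.losses.mean_squared_error(
    labels_2d, predictions_2d, weights=[[1.0, 0.0], [0.5, 1.0]]).numpy()
# 0.6666667, i.e. (0 + 0 + 2 + 0) / 3 nonzero weights
```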
| Returns |
| --- |
| Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar. |
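For example, a quick check of the shape behavior (illustrative values):

```python
import tensorflow as tf

labels = tf.constant([1.0, 2.0, 3.0])
predictions = tf.constant([1.0, 3.0, 5.0])

# With `Reduction.NONE`, the result keeps the shape of `labels`.
tf.compat.v1.losses.mean_squared_error(
    labels, predictions,
    reduction=tf.compat.v1.losses.Reduction.NONE).shape
# TensorShape([3])

# With any other reduction, the result is a scalar.
tf.compat.v1.losses.mean_squared_error(labels, predictions).shape
# TensorShape([])
```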
| Raises | |
| --- | --- |
| `ValueError` | If the shape of `predictions` doesn't match that of `labels`, or if the shape of `weights` is invalid. Also raised if `labels` or `predictions` is None. |