tf.compat.v1.losses.mean_pairwise_squared_error
Adds a pairwise-errors-squared loss to the training procedure.
tf.compat.v1.losses.mean_pairwise_squared_error(
    labels,
    predictions,
    weights=1.0,
    scope=None,
    loss_collection=ops.GraphKeys.LOSSES
)
Unlike mean_squared_error, which is a measure of the differences between corresponding elements of predictions and labels, mean_pairwise_squared_error is a measure of the differences between pairs of corresponding elements of predictions and labels.
For example, if labels = [a, b, c] and predictions = [x, y, z], there are three pairs of differences that are summed to compute the loss:

loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3
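
As a minimal sketch of the formula above (assuming TensorFlow 2.x with the compat.v1 losses available; the numeric values for a, b, c and x, y, z are made up), the loss for a single sample can be computed by hand and compared with the library call:

import tensorflow as tf

# Made-up values for the single example above: labels = [a, b, c], predictions = [x, y, z].
a, b, c = 1.0, 2.0, 4.0
x, y, z = 1.5, 2.5, 3.0

# Hand computation of the formula.
manual = (((a - b) - (x - y)) ** 2
          + ((a - c) - (x - z)) ** 2
          + ((b - c) - (y - z)) ** 2) / 3.0

# Same values through the op; note the leading batch dimension of size 1.
labels = tf.constant([[a, b, c]])
predictions = tf.constant([[x, y, z]])
loss = tf.compat.v1.losses.mean_pairwise_squared_error(
    labels=labels, predictions=predictions)

print(manual)        # 1.5 for these values
print(float(loss))   # expected to agree with the hand computation for this single-sample case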
Note that since the inputs are of shape [batch_size, d0, ... dN], the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if predictions represents a batch of 16 grayscale images, i.e. a tensor of shape [16, 100, 200], then the set of pairs is drawn from within each image, but not across images.
weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weights vector.
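
As an illustrative sketch of per-sample weighting (assuming TensorFlow 2.x; the values are made up), the weights below down-weight the second sample's contribution. A trailing axis of size 1 is added here so the per-sample weights broadcast cleanly against the rank-2 inputs:

import tensorflow as tf

# A made-up batch of two samples, each with three elements.
labels = tf.constant([[1.0, 2.0, 4.0],
                      [0.0, 1.0, 3.0]])
predictions = tf.constant([[1.5, 2.5, 3.0],
                           [0.0, 2.0, 2.0]])

# One weight per batch sample (shape [batch_size, 1] so it broadcasts):
# the second sample's pairwise loss counts half as much as the first's.
weights = tf.constant([[1.0],
                       [0.5]])

loss = tf.compat.v1.losses.mean_pairwise_squared_error(
    labels=labels, predictions=predictions, weights=weights)
print(float(loss))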
Args

labels: The ground truth output tensor, whose shape must match the shape of predictions.
predictions: The predicted outputs, a tensor of size [batch_size, d0, .. dN] where N+1 is the total number of dimensions in predictions.
weights: Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions.
scope: The scope for the operations performed in computing the loss.
loss_collection: Collection to which the loss will be added.
Returns

A scalar Tensor that returns the weighted loss.
Raises

ValueError: If the shape of predictions doesn't match that of labels, or if the shape of weights is invalid. Also if labels or predictions is None.
Eager Compatibility

The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
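
As a minimal eager-mode sketch (assuming TensorFlow 2.x; the linear model and data below are hypothetical), the returned tensor is held on to directly and differentiated with a tf.GradientTape instead of being read back from a collection:

import tensorflow as tf

# Hypothetical linear model producing three outputs per sample.
w = tf.Variable(tf.ones([2, 3]))
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
labels = tf.constant([[2.0, 1.0, 0.0],
                      [5.0, 4.0, 3.0]])

with tf.GradientTape() as tape:
    predictions = tf.matmul(x, w)
    # Executing eagerly: loss_collection is ignored, so keep the returned tensor.
    loss = tf.compat.v1.losses.mean_pairwise_squared_error(
        labels=labels, predictions=predictions)

grads = tape.gradient(loss, [w])  # use `loss` and `grads` directly in your training step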