Unlike mean_squared_error, which is a measure of the differences between
corresponding elements of predictions and labels,
mean_pairwise_squared_error is a measure of the differences between pairs of
corresponding elements of predictions and labels.
For example, if labels=[a, b, c] and predictions=[x, y, z], there are
three pairs of differences that are summed to compute the loss:
loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3
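A minimal NumPy sketch of this formula (illustrative only, not the library
implementation; the numeric values are made up) and, for contrast, the plain
element-wise mean squared error:

```python
import numpy as np

labels = np.array([1.0, 2.0, 4.0])       # [a, b, c]
predictions = np.array([1.5, 2.5, 3.0])  # [x, y, z]

# The three unique pairs (a,b), (a,c), (b,c) from the formula above.
pairs = [(0, 1), (0, 2), (1, 2)]
terms = [((labels[i] - labels[j]) - (predictions[i] - predictions[j])) ** 2
         for i, j in pairs]
pairwise_loss = sum(terms) / len(pairs)  # mean over the 3 pairwise terms

# mean_squared_error instead compares corresponding elements directly.
mse = np.mean((labels - predictions) ** 2)
print(pairwise_loss, mse)
```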
Note that since the inputs are of size [batch_size, d0, ... dN], the
corresponding pairs are computed within each batch sample but not across
samples within a batch. For example, if predictions represents a batch of
16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs
is drawn from each image, but not across images.
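The sketch below (NumPy, with made-up values) illustrates this: each sample is
flattened and pairs are formed only within that sample. How the per-sample
values are then reduced to the final scalar is left to the library.

```python
import numpy as np

def pairwise_loss_per_sample(y_true, y_pred):
    """Mean squared pairwise-difference error within one flattened sample."""
    dt = y_true[:, None] - y_true[None, :]   # all pairwise label differences
    dp = y_pred[:, None] - y_pred[None, :]   # all pairwise prediction differences
    sq = (dt - dp) ** 2
    # Keep only the n*(n-1)/2 unique pairs (upper triangle, no diagonal).
    iu = np.triu_indices(y_true.size, k=1)
    return sq[iu].mean()

labels = np.array([[1.0, 2.0, 4.0],
                   [0.0, 1.0, 3.0]])         # shape [batch_size=2, 3]
predictions = np.array([[1.5, 2.5, 3.0],
                        [0.5, 0.5, 2.0]])

# One pairwise loss per batch sample; no pair mixes elements of two samples.
per_sample = np.array([pairwise_loss_per_sample(l.ravel(), p.ravel())
                       for l, p in zip(labels, predictions)])
print(per_sample)
```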
weights acts as a coefficient for the loss. If a scalar is provided, then
the loss is simply scaled by the given value. If weights is a tensor of size
[batch_size], then the total loss for each sample of the batch is rescaled
by the corresponding element in the weights vector.
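For instance (hypothetical per-sample loss values; only the rescaling step is
shown, since the final reduction to a scalar is the library's concern):

```python
import numpy as np

per_sample_loss = np.array([1.5, 0.9])   # hypothetical losses for a batch of 2
weights = np.array([1.0, 0.5])           # one coefficient per batch sample

rescaled = per_sample_loss * weights     # each sample's loss scaled by its weight
print(rescaled)                          # [1.5, 0.45]
# With a scalar weight w, the whole loss is simply multiplied by w instead.
```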
Args:
  predictions: The predicted outputs, a tensor of size [batch_size, d0, ... dN]
    where N+1 is the total number of dimensions in predictions.
  labels: The ground truth output tensor, whose shape must match the shape of
    the predictions tensor.
  weights: Coefficients for the loss: a scalar, a tensor of shape [batch_size],
    or a tensor whose shape matches predictions.
  scope: The scope for the operations performed in computing the loss.
Returns:
  A scalar Tensor representing the loss value.

Raises:
  ValueError: If the shape of predictions doesn't match that of labels, or if
    the shape of weights is invalid.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2020-10-01 UTC."],[],[],null,["# tf.contrib.losses.mean_pairwise_squared_error\n\n\u003cbr /\u003e\n\n|----------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/losses/python/losses/loss_ops.py#L521-L604) |\n\nAdds a pairwise-errors-squared loss to the training procedure. (deprecated) \n\n tf.contrib.losses.mean_pairwise_squared_error(\n predictions, labels=None, weights=1.0, scope=None\n )\n\n| **Warning:** THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30. Instructions for updating: Use tf.losses.mean_pairwise_squared_error instead. Note that the order of the predictions and labels arguments has been changed.\n\nUnlike `mean_squared_error`, which is a measure of the differences between\ncorresponding elements of `predictions` and `labels`,\n`mean_pairwise_squared_error` is a measure of the differences between pairs of\ncorresponding elements of `predictions` and `labels`.\n\nFor example, if `labels`=\\[a, b, c\\] and `predictions`=\\[x, y, z\\], there are\nthree pairs of differences are summed to compute the loss:\nloss = \\[ ((a-b) - (x-y)).\\^2 + ((a-c) - (x-z)).\\^2 + ((b-c) - (y-z)).\\^2 \\] / 3\n\nNote that since the inputs are of size \\[batch_size, d0, ... dN\\], the\ncorresponding pairs are computed within each batch sample but not across\nsamples within a batch. For example, if `predictions` represents a batch of\n16 grayscale images of dimension \\[batch_size, 100, 200\\], then the set of pairs\nis drawn from each image, but not across images.\n\n`weights` acts as a coefficient for the loss. If a scalar is provided, then\nthe loss is simply scaled by the given value. If `weights` is a tensor of size\n\\[batch_size\\], then the total loss for each sample of the batch is rescaled\nby the corresponding element in the `weights` vector.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|---------------|---------------------------------------------------------------------------------------------------------------------------------|\n| `predictions` | The predicted outputs, a tensor of size \\[batch_size, d0, .. dN\\] where N+1 is the total number of dimensions in `predictions`. |\n| `labels` | The ground truth output tensor, whose shape must match the shape of the `predictions` tensor. |\n| `weights` | Coefficients for the loss a scalar, a tensor of shape \\[batch_size\\] or a tensor whose shape matches `predictions`. |\n| `scope` | The scope for the operations performed in computing the loss. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| A scalar `Tensor` representing the loss value. 
||\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|--------------|-------------------------------------------------------------------------------------------------------|\n| `ValueError` | If the shape of `predictions` doesn't match that of `labels` or if the shape of `weights` is invalid. |\n\n\u003cbr /\u003e"]]
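A minimal TF1-style usage sketch (assumes TensorFlow 1.x with tf.contrib
available; the tensor values are made up for illustration):

```python
import tensorflow as tf  # TensorFlow 1.x

predictions = tf.constant([[1.5, 2.5, 3.0]])  # [batch_size=1, d0=3]
labels = tf.constant([[1.0, 2.0, 4.0]])

loss = tf.contrib.losses.mean_pairwise_squared_error(
    predictions, labels=labels, weights=1.0)

with tf.Session() as sess:
    print(sess.run(loss))  # a scalar loss value

# The non-deprecated equivalent swaps the first two arguments:
# tf.losses.mean_pairwise_squared_error(labels, predictions, weights=1.0)
```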