# tf.contrib.kernel_methods.sparse_multiclass_hinge_loss
Adds Ops for computing the multiclass hinge loss.
    tf.contrib.kernel_methods.sparse_multiclass_hinge_loss(
        labels, logits, weights=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES,
        reduction=losses.Reduction.SUM_BY_NONZERO_WEIGHTS
    )
The implementation is based on the paper "On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines" by Crammer and Singer: http://jmlr.csail.mit.edu/papers/volume2/crammer01a/crammer01a.pdf
This is a generalization of standard (binary) hinge loss. For a given instance with correct label \(c^*\), the loss is given by:

$$loss = \max_{c \ne c^*} \text{logits}_c - \text{logits}_{c^*} + 1$$

or equivalently

$$loss = \max_c \left\{ \text{logits}_c - \text{logits}_{c^*} + I_{c \ne c^*} \right\}$$

where \(I_{c \ne c^*} = 1\) if \(c \ne c^*\) and 0 otherwise.
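The per-example formula above can be sketched in plain NumPy. This is an illustrative reimplementation for clarity, not the TensorFlow op itself, and the function name is hypothetical:

```python
import numpy as np

def multiclass_hinge_loss(labels, logits):
    """Crammer-Singer loss: max_c { logits_c - logits_{c*} + I(c != c*) } per example."""
    labels = np.asarray(labels)
    logits = np.asarray(logits, dtype=np.float64)
    batch = np.arange(logits.shape[0])
    correct = logits[batch, labels]                # logits_{c*} for each example
    margins = logits - correct[:, None] + 1.0      # logits_c - logits_{c*} + 1 for all c
    margins[batch, labels] -= 1.0                  # indicator term is 0 when c == c*
    return margins.max(axis=1)                     # per-example losses, shape [batch_size]

# Example: first instance is classified with margin >= 1 (loss 0),
# second instance's correct-class logit trails the best wrong class.
losses = multiclass_hinge_loss([0, 2], [[2.0, 1.0, 0.0],
                                        [0.0, 2.0, 1.0]])
# losses -> [0.0, 2.0]
```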
| Args ||
|---|---|
| `labels` | `Tensor` of shape `[batch_size]` or `[batch_size, 1]`. Corresponds to the ground truth. Each entry must be an index in `[0, num_classes)`. |
| `logits` | `Tensor` of shape `[batch_size, num_classes]` corresponding to the unscaled logits. Its dtype should be either `float32` or `float64`. |
| `weights` | Optional (Python) scalar or `Tensor`. If a non-scalar `Tensor`, its rank should be either 1 (`[batch_size]`) or 2 (`[batch_size, 1]`). |
| `scope` | The scope for the operations performed in computing the loss. |
| `loss_collection` | Collection to which the loss will be added. |
| `reduction` | Type of reduction to apply to the loss. |
| Returns |
|---|
| Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is a scalar. |
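The shape of the return value depends on the reduction. A rough NumPy mimic of the default `SUM_BY_NONZERO_WEIGHTS` behavior versus `NONE` is sketched below; this is an assumption-laden illustration of the documented semantics, not the actual TensorFlow implementation:

```python
import numpy as np

def reduce_loss(per_example_losses, weights=1.0, reduction="SUM_BY_NONZERO_WEIGHTS"):
    """Illustrative reduction semantics (names mirror tf.losses.Reduction)."""
    losses = np.asarray(per_example_losses, dtype=np.float64)
    weights = np.broadcast_to(np.asarray(weights, dtype=np.float64), losses.shape)
    weighted = losses * weights
    if reduction == "NONE":
        return weighted                              # same shape as the labels
    num_nonzero = np.count_nonzero(weights)          # examples with nonzero weight
    return weighted.sum() / max(num_nonzero, 1)      # scalar

# Zero-weighted examples are excluded from the averaging denominator:
scalar = reduce_loss([0.0, 2.0, 4.0], weights=[1.0, 1.0, 0.0])
# scalar -> (0.0 + 2.0) / 2 == 1.0
vector = reduce_loss([0.0, 2.0, 4.0], weights=[1.0, 1.0, 0.0], reduction="NONE")
# vector -> [0.0, 2.0, 0.0]
```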
| Raises ||
|---|---|
| `ValueError` | If `logits`, `labels`, or `weights` have invalid or inconsistent shapes. |
| `ValueError` | If the `labels` tensor has an invalid dtype. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.