# tf.compat.v1.nn.weighted_cross_entropy_with_logits

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/ops/nn_impl.py#L341-L403)

Computes a weighted cross entropy. (deprecated arguments)

    tf.compat.v1.nn.weighted_cross_entropy_with_logits(
        labels=None, logits=None, pos_weight=None, name=None, targets=None
    )

> **Deprecated:** SOME ARGUMENTS ARE DEPRECATED: `(targets)`. They will be
> removed in a future version. Instructions for updating: `targets` is
> deprecated; use `labels` instead.

This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`
allows one to trade off recall and precision by up- or down-weighting the
cost of a positive error relative to a negative error.

The usual cross-entropy cost is defined as:

    labels * -log(sigmoid(logits)) +
    (1 - labels) * -log(1 - sigmoid(logits))
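For example, a minimal sketch (with illustrative values) that checks this
formula against `tf.nn.sigmoid_cross_entropy_with_logits`:

    import tensorflow as tf

    # Illustrative values; any same-shape float tensors work.
    logits = tf.constant([1.0, -1.0, 3.0])
    labels = tf.constant([1.0, 0.0, 1.0])

    # The unweighted cost written out as above.
    manual = (labels * -tf.math.log(tf.sigmoid(logits)) +
              (1 - labels) * -tf.math.log(1 - tf.sigmoid(logits)))
    builtin = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels,
                                                      logits=logits)

    print(manual.numpy())   # ~[0.3133 0.3133 0.0486]
    print(builtin.numpy())  # matches up to floating-point error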
A value `pos_weight > 1` decreases the false negative count, hence increasing
the recall.
Conversely, setting `pos_weight < 1` decreases the false positive count and
increases the precision.
This can be seen from the fact that `pos_weight` is introduced as a
multiplicative coefficient for the positive labels term
in the loss expression:

    labels * -log(sigmoid(logits)) * pos_weight +
    (1 - labels) * -log(1 - sigmoid(logits))
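A small sketch of this trade-off, using illustrative values where the model
is equally wrong on one positive and one negative example: with
`pos_weight = 2`, only the positive error's cost doubles.

    import tensorflow as tf

    logits = tf.constant([-2.0, 2.0])  # wrong on both examples
    labels = tf.constant([1.0, 0.0])   # first is positive, second negative
    pos_weight = 2.0

    unweighted = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels,
                                                         logits=logits)
    weighted = (labels * -tf.math.log(tf.sigmoid(logits)) * pos_weight +
                (1 - labels) * -tf.math.log(1 - tf.sigmoid(logits)))

    print(unweighted.numpy())  # ~[2.1269 2.1269]
    print(weighted.numpy())    # ~[4.2538 2.1269] -- only the positive term grows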
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-04-26 UTC."],[],[],null,["# tf.compat.v1.nn.weighted_cross_entropy_with_logits\n\n\u003cbr /\u003e\n\n|---------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/ops/nn_impl.py#L341-L403) |\n\nComputes a weighted cross entropy. (deprecated arguments) \n\n tf.compat.v1.nn.weighted_cross_entropy_with_logits(\n labels=None, logits=None, pos_weight=None, name=None, targets=None\n )\n\n| **Deprecated:** SOME ARGUMENTS ARE DEPRECATED: `(targets)`. They will be removed in a future version. Instructions for updating: targets is deprecated, use labels instead\n\nThis is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`,\nallows one to trade off recall and precision by up- or down-weighting the\ncost of a positive error relative to a negative error.\n\nThe usual cross-entropy cost is defined as: \n\n labels * -log(sigmoid(logits)) +\n (1 - labels) * -log(1 - sigmoid(logits))\n\nA value `pos_weight \u003e 1` decreases the false negative count, hence increasing\nthe recall.\nConversely setting `pos_weight \u003c 1` decreases the false positive count and\nincreases the precision.\nThis can be seen from the fact that `pos_weight` is introduced as a\nmultiplicative coefficient for the positive labels term\nin the loss expression: \n\n labels * -log(sigmoid(logits)) * pos_weight +\n (1 - labels) * -log(1 - sigmoid(logits))\n\nFor brevity, let `x = logits`, `z = labels`, `q = pos_weight`.\nThe loss is: \n\n qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))\n = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))\n = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))\n = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x))\n = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))\n = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))\n\nSetting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow,\nthe implementation uses \n\n (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))\n\n`logits` and `labels` must have the same type and shape.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|--------------|----------------------------------------------------|\n| `labels` | A `Tensor` of the same type and shape as `logits`. |\n| `logits` | A `Tensor` of type `float32` or `float64`. |\n| `pos_weight` | A coefficient to use on the positive examples. |\n| `name` | A name for the operation (optional). |\n| `targets` | Deprecated alias for labels. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| A `Tensor` of the same shape as `logits` with the componentwise weighted logistic losses. ||\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|--------------|------------------------------------------------------|\n| `ValueError` | If `logits` and `labels` do not have the same shape. |\n\n\u003cbr /\u003e"]]