# tf.contrib.metrics.cohen_kappa

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/metrics/python/ops/metric_ops.py#L3789-L3921)

Calculates Cohen's kappa.

    tf.contrib.metrics.cohen_kappa(
        labels, predictions_idx, num_classes, weights=None,
        metrics_collections=None, updates_collections=None, name=None
    )

[Cohen's kappa](https://en.wikipedia.org/wiki/Cohen's_kappa) is a statistic
that measures inter-annotator agreement.
The `cohen_kappa` function computes the confusion matrix and creates three
local variables to compute Cohen's kappa: `po`, `pe_row`, and `pe_col`,
which hold the diagonal entries, the row totals, and the column totals of the
confusion matrix, respectively. The metric is ultimately returned as `kappa`,
an idempotent operation that is calculated by

    pe = (pe_row * pe_col) / N
    k = (sum(po) - sum(pe)) / (N - sum(pe))
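This decomposition can be sketched in plain Python; `batch_kappa` below is an illustrative helper, not the TensorFlow op:

```python
# Illustrative plain-Python version of the decomposition above; not the
# TensorFlow op. Accumulates po (diagonal), pe_row (row totals), and
# pe_col (column totals) of the confusion matrix, then applies the formula.
def batch_kappa(labels, predictions, num_classes, weights=None):
    if weights is None:
        weights = [1.0] * len(labels)
    # Weighted confusion matrix: rows are labels, columns are predictions.
    cm = [[0.0] * num_classes for _ in range(num_classes)]
    for y, p, w in zip(labels, predictions, weights):
        cm[y][p] += w
    po = [cm[i][i] for i in range(num_classes)]        # diagonal entries
    pe_row = [sum(row) for row in cm]                  # row totals
    pe_col = [sum(col) for col in zip(*cm)]            # column totals
    n = sum(pe_row)
    pe = sum(r * c / n for r, c in zip(pe_row, pe_col))
    return (sum(po) - pe) / (n - pe)

print(batch_kappa([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 2, 0], num_classes=3))
# 0.75
```

Here five of six predictions agree with the labels (`sum(po) = 5`, `N = 6`) and chance agreement `sum(pe) = 2`, giving `(5 - 2) / (6 - 2) = 0.75`.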
For estimation of the metric over a stream of data, the function creates an
`update_op` operation that updates these variables and returns `kappa`.
`update_op` weights each prediction by the corresponding value in
`weights`.
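The streaming behaviour can be sketched with a small accumulator; `StreamingKappa` is a hypothetical stand-in for the local variables that `update_op` maintains, not part of the TensorFlow API:

```python
# Hypothetical streaming accumulator mirroring what update_op does: each
# batch updates the po, pe_row, and pe_col variables; result() recomputes
# kappa from the running totals, so it is idempotent between updates.
class StreamingKappa:
    def __init__(self, num_classes):
        self.po = [0.0] * num_classes
        self.pe_row = [0.0] * num_classes
        self.pe_col = [0.0] * num_classes

    def update(self, labels, predictions, weights=None):
        if weights is None:
            weights = [1.0] * len(labels)
        for y, p, w in zip(labels, predictions, weights):
            if y == p:
                self.po[y] += w      # diagonal of the confusion matrix
            self.pe_row[y] += w      # row totals
            self.pe_col[p] += w      # column totals
        return self.result()         # like update_op, returns the running kappa

    def result(self):
        n = sum(self.pe_row)
        pe = sum(r * c / n for r, c in zip(self.pe_row, self.pe_col))
        return (sum(self.po) - pe) / (n - pe)

m = StreamingKappa(num_classes=3)
m.update([0, 0, 1], [0, 0, 1])
print(m.update([1, 2, 2], [1, 2, 0]))  # 0.75: same as scoring all six at once
```

Because the variables are simple running totals, feeding the data in batches yields the same kappa as a single pass over all of it.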
Class labels are expected to start at 0. E.g., if `num_classes`
was three, then the possible labels would be \[0, 1, 2\].
If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

**Note:** Equivalent to `sklearn.metrics.cohen_kappa_score`, but the method doesn't support weighted matrix yet.
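Masking with zero weights can be illustrated in plain Python (the `kappa` helper below is hypothetical, not the TensorFlow op): a pair with weight 0 contributes nothing to either observed or chance agreement, exactly as if it had never been seen.

```python
# Hypothetical helper, not the TensorFlow op: pairs with weight 0 are
# dropped entirely, so they contribute nothing to agreement or chance.
def kappa(labels, preds, weights):
    pairs = [(y, p) for y, p, w in zip(labels, preds, weights) if w > 0]
    n = len(pairs)
    po = sum(y == p for y, p in pairs) / n               # observed agreement
    classes = {c for pair in pairs for c in pair}
    pe = sum((sum(y == c for y, _ in pairs) / n) *       # chance agreement
             (sum(p == c for _, p in pairs) / n) for c in classes)
    return (po - pe) / (1 - pe)

# Masking the lone disagreement with weight 0 yields perfect agreement.
print(kappa([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 2, 0], [1, 1, 1, 1, 1, 0]))
# 1.0
```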
Args:

* `labels`: 1-D `Tensor` of real labels for the classification task. Must be one of the following types: `int16`, `int32`, `int64`.
* `predictions_idx`: 1-D `Tensor` of predicted class indices for a given classification. Must have the same type as `labels`.
* `num_classes`: The possible number of labels.
* `weights`: Optional `Tensor` whose shape matches `predictions`.
* `metrics_collections`: An optional list of collections that `kappa` should be added to.
* `updates_collections`: An optional list of collections that `update_op` should be added to.
* `name`: An optional variable_scope name.
Returns:

* `kappa`: Scalar float `Tensor` representing the current Cohen's kappa.
* `update_op`: `Operation` that increments the `po`, `pe_row`, and `pe_col` variables appropriately and whose value matches `kappa`.
Raises:

* `ValueError`: If `num_classes` is less than 2, or `predictions` and `labels` have mismatched shapes, or if `weights` is not `None` and its shape doesn't match `predictions`, or if either `metrics_collections` or `updates_collections` is not a list or tuple.
* `RuntimeError`: If eager execution is enabled.
Last updated 2020-10-01 UTC.