tf.keras.metrics.MeanIoU
Computes the mean Intersection-Over-Union metric.
Inherits From: Metric
tf.keras.metrics.MeanIoU(
    num_classes, name=None, dtype=None
)
Mean Intersection-Over-Union is a common evaluation metric for semantic image
segmentation, which first computes the IOU for each semantic class and then
computes the average over classes. IOU is defined as follows:
IOU = true_positive / (true_positive + false_positive + false_negative).
The predictions are accumulated in a confusion matrix, weighted by sample_weight, and the metric is then calculated from it.
If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Usage:
m = tf.keras.metrics.MeanIoU(num_classes=2)
m.update_state([0, 0, 1, 1], [0, 1, 0, 1])
# cm = [[1, 1],
#       [1, 1]]
# sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1]
# iou = true_positives / (sum_row + sum_col - true_positives)
# result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33
print('Final result: ', m.result().numpy())  # Final result: 0.33
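The masking behaviour described above can be illustrated by giving one example a sample_weight of 0 so that it is dropped from the confusion matrix. The sketch below and its hand-worked numbers are illustrative and not part of the original example:

import tensorflow as tf

# Sketch of masking with sample_weight: the last example gets weight 0,
# so it does not contribute to the confusion matrix.
m = tf.keras.metrics.MeanIoU(num_classes=2)
m.update_state([0, 0, 1, 1], [0, 1, 0, 1],
               sample_weight=[1, 1, 1, 0])
# The last (1, 1) pair is masked out, so:
# cm = [[1, 1],
#       [1, 0]]
# sum_row = [2, 1], sum_col = [2, 1], true_positives = [1, 0]
# iou = [1 / (2 + 2 - 1), 0 / (1 + 1 - 0)] = [0.33, 0]
# result = (0.33 + 0) / 2 ≈ 0.17
print('Masked result: ', m.result().numpy())  # Masked result: ~0.17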
Usage with tf.keras API:
model = tf.keras.Model(inputs, outputs)
model.compile(
    'sgd',
    loss='mse',
    metrics=[tf.keras.metrics.MeanIoU(num_classes=2)])
Args
num_classes: The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Methods
reset_states
reset_states()
Resets all of the metric state variables.
This function is called between epochs/steps,
when a metric is evaluated during training.
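For context, a minimal sketch of the per-epoch workflow this supports is shown below; the data, loop structure, and printed labels are illustrative assumptions rather than part of the original documentation:

import tensorflow as tf

m = tf.keras.metrics.MeanIoU(num_classes=2)

# Illustrative loop: accumulate over batches within an epoch, read the
# running result, then clear the confusion matrix before the next epoch.
for epoch in range(2):
    for y_true, y_pred in [([0, 0, 1, 1], [0, 1, 0, 1]),
                           ([1, 1, 0, 0], [1, 1, 0, 0])]:
        m.update_state(y_true, y_pred)
    print('epoch', epoch, 'mean IoU:', m.result().numpy())
    m.reset_states()  # next epoch starts from an empty confusion matrix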
result
result()
Compute the mean intersection-over-union via the confusion matrix.
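The arithmetic behind this can be sketched from the comments in the usage example above; the helper below is an illustrative approximation, not the library's exact implementation:

import tensorflow as tf

def mean_iou_from_confusion_matrix(cm):
    """Illustrative mean IoU from a [num_classes, num_classes] confusion matrix."""
    cm = tf.cast(cm, tf.float32)
    sum_over_row = tf.reduce_sum(cm, axis=0)
    sum_over_col = tf.reduce_sum(cm, axis=1)
    true_positives = tf.linalg.diag_part(cm)
    denominator = sum_over_row + sum_over_col - true_positives
    # Assumption: classes that never appear (zero denominator) are left out
    # of the average, which is common mean-IoU practice.
    num_valid = tf.reduce_sum(tf.cast(tf.not_equal(denominator, 0), tf.float32))
    iou = tf.math.divide_no_nan(true_positives, denominator)
    return tf.reduce_sum(iou) / tf.maximum(num_valid, 1.0)

# Confusion matrix from the usage example above:
print(mean_iou_from_confusion_matrix([[1, 1], [1, 1]]).numpy())  # ~0.33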
update_state
update_state(
    y_true, y_pred, sample_weight=None
)
Accumulates the confusion matrix statistics.
Args
y_true: The ground truth values.
y_pred: The predicted values.
sample_weight: Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true.
Returns
Update op.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2020-10-01 UTC."],[],[],null,["# tf.keras.metrics.MeanIoU\n\n\u003cbr /\u003e\n\n|----------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|\n| [TensorFlow 1 version](/versions/r1.15/api_docs/python/tf/keras/metrics/MeanIoU) | [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.0.0/tensorflow/python/keras/metrics.py#L2253-L2378) |\n\nComputes the mean Intersection-Over-Union metric.\n\nInherits From: [`Metric`](../../../tf/keras/metrics/Metric)\n\n#### View aliases\n\n\n**Main aliases**\n\n[`tf.metrics.MeanIoU`](/api_docs/python/tf/keras/metrics/MeanIoU)\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n[`tf.compat.v1.keras.metrics.MeanIoU`](/api_docs/python/tf/keras/metrics/MeanIoU)\n\n\u003cbr /\u003e\n\n tf.keras.metrics.MeanIoU(\n num_classes, name=None, dtype=None\n )\n\nMean Intersection-Over-Union is a common evaluation metric for semantic image\nsegmentation, which first computes the IOU for each semantic class and then\ncomputes the average over classes. IOU is defined as follows:\nIOU = true_positive / (true_positive + false_positive + false_negative).\nThe predictions are accumulated in a confusion matrix, weighted by\n`sample_weight` and the metric is then calculated from it.\n\nIf `sample_weight` is `None`, weights default to 1.\nUse `sample_weight` of 0 to mask values.\n\n#### Usage:\n\n m = tf.keras.metrics.MeanIoU(num_classes=2)\n m.update_state([0, 0, 1, 1], [0, 1, 0, 1])\n\n # cm = [[1, 1],\n [1, 1]]\n # sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1]\n # iou = true_positives / (sum_row + sum_col - true_positives))\n # result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33\n print('Final result: ', m.result().numpy()) # Final result: 0.33\n\nUsage with tf.keras API: \n\n model = tf.keras.Model(inputs, outputs)\n model.compile(\n 'sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanIoU(num_classes=2)])\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `num_classes` | The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = \\[num_classes, num_classes\\] will be allocated. |\n| `name` | (Optional) string name of the metric instance. |\n| `dtype` | (Optional) data type of the metric result. 
|\n\n\u003cbr /\u003e\n\nMethods\n-------\n\n### `reset_states`\n\n[View source](https://github.com/tensorflow/tensorflow/blob/v2.0.0/tensorflow/python/keras/metrics.py#L2372-L2373) \n\n reset_states()\n\nResets all of the metric state variables.\n\nThis function is called between epochs/steps,\nwhen a metric is evaluated during training.\n\n### `result`\n\n[View source](https://github.com/tensorflow/tensorflow/blob/v2.0.0/tensorflow/python/keras/metrics.py#L2348-L2370) \n\n result()\n\nCompute the mean intersection-over-union via the confusion matrix.\n\n### `update_state`\n\n[View source](https://github.com/tensorflow/tensorflow/blob/v2.0.0/tensorflow/python/keras/metrics.py#L2312-L2346) \n\n update_state(\n y_true, y_pred, sample_weight=None\n )\n\nAccumulates the confusion matrix statistics.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ||\n|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `y_true` | The ground truth values. |\n| `y_pred` | The predicted values. |\n| `sample_weight` | Optional weighting of each example. Defaults to 1. Can be a `Tensor` whose rank is either 0, or the same rank as `y_true`, and must be broadcastable to `y_true`. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ||\n|---|---|\n| Update op. ||\n\n\u003cbr /\u003e"]]