tf.keras.metrics.PrecisionAtRecall

View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v2.2.0/tensorflow/python/keras/metrics.py#L1656-L1730

Main aliases: tf.metrics.PrecisionAtRecall
Compat aliases for migration: tf.compat.v1.keras.metrics.PrecisionAtRecall

Computes the precision at a given recall.
tf.keras.metrics.PrecisionAtRecall(
    recall, num_thresholds=200, name=None, dtype=None
)
This metric creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the precision at the given recall. The threshold for the given recall value is computed and used to evaluate the corresponding precision.

If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0 to mask values.
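The computation described above can be sketched in plain NumPy. The `precision_at_recall` helper below is hypothetical, not TensorFlow's implementation: for each candidate threshold it accumulates the confusion-matrix counters, then reports precision at the threshold whose recall is closest to the target.

```python
import numpy as np

def precision_at_recall(y_true, y_pred, target_recall,
                        num_thresholds=200, sample_weight=None):
    """Sketch: precision at the threshold whose recall best matches target_recall.

    Hypothetical helper for illustration; the real Keras metric accumulates
    its counters as state variables across batches.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    w = (np.ones_like(y_true) if sample_weight is None
         else np.asarray(sample_weight, dtype=float))

    # A single threshold sits at 0.5; otherwise spread thresholds over [0, 1].
    thresholds = [0.5] if num_thresholds == 1 else np.linspace(0.0, 1.0, num_thresholds)

    best_gap, best_precision = np.inf, 0.0
    for t in thresholds:
        pos = y_pred > t
        tp = np.sum(w * (pos & (y_true == 1)))   # true_positives
        fp = np.sum(w * (pos & (y_true == 0)))   # false_positives
        fn = np.sum(w * (~pos & (y_true == 1)))  # false_negatives
        recall = tp / max(tp + fn, 1e-12)
        precision = tp / max(tp + fp, 1e-12)
        # Keep the precision at the threshold whose recall is closest to the target.
        if abs(recall - target_recall) < best_gap:
            best_gap = abs(recall - target_recall)
            best_precision = precision
    return best_precision
```

With the example data from the usage section below and `num_thresholds=1`, this sketch reproduces the `1.0` shown there: the single threshold 0.5 yields one true positive and no false positives.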
Usage:

m = tf.keras.metrics.PrecisionAtRecall(0.8, num_thresholds=1)
_ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9])
m.result().numpy()
# 1.0

m.reset_states()
_ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9],
                   sample_weight=[1, 0, 0, 1])
m.result().numpy()
# 1.0
Usage with tf.keras API:

model = tf.keras.Model(inputs, outputs)
model.compile(
    'sgd',
    loss='mse',
    metrics=[tf.keras.metrics.PrecisionAtRecall(recall=0.8)])
| Args | |
|------|---|
| `recall` | A scalar value in range `[0, 1]`. |
| `num_thresholds` | (Optional) Defaults to 200. The number of thresholds to use for matching the given recall. |
| `name` | (Optional) string name of the metric instance. |
| `dtype` | (Optional) data type of the metric result. |
Methods

reset_states

View source: https://github.com/tensorflow/tensorflow/blob/v2.2.0/tensorflow/python/keras/metrics.py#L1477-L1480

reset_states()

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.

result

View source: https://github.com/tensorflow/tensorflow/blob/v2.2.0/tensorflow/python/keras/metrics.py#L1711-L1725

result()

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

update_state

View source: https://github.com/tensorflow/tensorflow/blob/v2.2.0/tensorflow/python/keras/metrics.py#L1452-L1475

update_state(
    y_true, y_pred, sample_weight=None
)

Accumulates confusion matrix statistics.
| Args | |
|------|---|
| `y_true` | The ground truth values. |
| `y_pred` | The predicted values. |
| `sample_weight` | Optional weighting of each example. Defaults to 1. Can be a `Tensor` whose rank is either 0, or the same rank as `y_true`, and must be broadcastable to `y_true`. |

| Returns | |
|---------|---|
| Update op. | |
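To make the `update_state` / `result` / `reset_states` lifecycle concrete, here is a minimal pure-Python stand-in. `PrecisionAtRecallSketch` is a hypothetical class, simplified to a single 0.5 threshold as in the `num_thresholds=1` example above, not the real Keras implementation:

```python
import numpy as np

class PrecisionAtRecallSketch:
    """Hypothetical stand-in mirroring the stateful metric lifecycle.

    Simplified to one threshold (0.5): with a single threshold there is only
    one (recall, precision) pair, so that precision is what gets reported.
    """

    def __init__(self, recall):
        self.target = recall
        self.reset_states()

    def reset_states(self):
        # Clear the four accumulated confusion-matrix counters.
        self.tp = self.fp = self.fn = self.tn = 0.0

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Accumulate (optionally weighted) confusion-matrix statistics.
        y_true = np.asarray(y_true, dtype=float)
        pos = np.asarray(y_pred, dtype=float) > 0.5
        w = (np.ones_like(y_true) if sample_weight is None
             else np.asarray(sample_weight, dtype=float))
        self.tp += np.sum(w * (pos & (y_true == 1)))
        self.fp += np.sum(w * (pos & (y_true == 0)))
        self.fn += np.sum(w * (~pos & (y_true == 1)))
        self.tn += np.sum(w * (~pos & (y_true == 0)))

    def result(self):
        # Idempotent: computed purely from the accumulated state variables.
        return self.tp / max(self.tp + self.fp, 1e-12)
```

Calling `update_state` once per batch accumulates the counters, `result()` can be read repeatedly without changing the state, and `reset_states()` clears the counters between epochs.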
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.