tf.keras.metrics.FalsePositives
Calculates the number of false positives.
Inherits From: Metric
tf.keras.metrics.FalsePositives(
thresholds=None, name=None, dtype=None
)
Used in the notebooks
Used in the tutorials: Classification on imbalanced data (https://www.tensorflow.org/tutorials/structured_data/imbalanced_data)
If sample_weight is given, calculates the sum of the weights of false positives. This metric creates one local variable, accumulator, that is used to keep track of the number of false positives.
If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
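As an illustration of the accumulator behaviour (a minimal sketch, not part of the original examples), successive update_state() calls keep adding to the running count:
m = keras.metrics.FalsePositives()
m.update_state([0, 1, 0, 0], [0, 0, 1, 1])  # 2 false positives
m.update_state([0, 0, 1, 1], [1, 0, 1, 1])  # 1 more false positive
m.result()
3.0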
Args
thresholds: (Optional) Defaults to 0.5. A float value, or a Python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is True, below is False). If used with a loss function that sets from_logits=True (i.e. no sigmoid applied to predictions), thresholds should be set to 0. One metric value is generated for each threshold value.
name: (Optional) String name of the metric instance.
dtype: (Optional) Data type of the metric result.
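For illustration (a sketch assuming the behaviour described above), passing a list of thresholds produces one false-positive count per threshold:
m = keras.metrics.FalsePositives(thresholds=[0.3, 0.7])
m.update_state([0, 1, 0, 0], [0.2, 0.5, 0.8, 0.4])
m.result()
[2.0, 1.0]  # 2 false positives at threshold 0.3, 1 at threshold 0.7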
Examples:
m = keras.metrics.FalsePositives()
m.update_state([0, 1, 0, 0], [0, 0, 1, 1])
m.result()
2.0
m.reset_state()
m.update_state([0, 1, 0, 0], [0, 0, 1, 1], sample_weight=[0, 0, 1, 0])
m.result()
1.0
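A typical sketch of use with the compile() API (model is assumed here to be a binary classifier, not defined in this reference):
model.compile(
    optimizer='sgd',
    loss='binary_crossentropy',
    metrics=[keras.metrics.FalsePositives()]
)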
Attributes
dtype
variables
Methods
add_variable
View source
add_variable(
shape, initializer, dtype=None, aggregation='sum', name=None
)
add_weight
View source
add_weight(
shape=(), initializer=None, dtype=None, name=None
)
from_config
View source
@classmethod
from_config(
config
)
get_config
View source
get_config()
Return the serializable config of the metric.
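A brief, illustrative sketch of serializing and restoring the metric via its config:
m = keras.metrics.FalsePositives(thresholds=[0.25, 0.75], name='fp')
config = m.get_config()
m2 = keras.metrics.FalsePositives.from_config(config)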
reset_state
View source
reset_state()
Reset all of the metric state variables.
This function is called between epochs/steps,
when a metric is evaluated during training.
result
View source
result()
Compute the current metric value.
Returns
A scalar tensor, or a dictionary of scalar tensors.
stateless_reset_state
View source
stateless_reset_state()
stateless_result
View source
stateless_result(
metric_variables
)
stateless_update_state
View source
stateless_update_state(
metric_variables, *args, **kwargs
)
update_state
View source
update_state(
y_true, y_pred, sample_weight=None
)
Accumulates the metric statistics.
Args
y_true: The ground truth values.
y_pred: The predicted values.
sample_weight: Optional weighting of each example. Defaults to 1. Can be a tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true.
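For example (a sketch), a rank-0 sample_weight is broadcast to every example, so the two false positives below each count with weight 0.5:
m = keras.metrics.FalsePositives()
m.update_state([0, 1, 0, 0], [0, 0, 1, 1], sample_weight=0.5)
m.result()
1.0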
__call__
View source
__call__(
*args, **kwargs
)
Call self as a function.
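In this sketch, calling the metric instance directly is assumed to be shorthand for update_state() followed by result():
m = keras.metrics.FalsePositives()
m([0, 1, 0, 0], [0, 0, 1, 1])
2.0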
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-06-07 UTC."],[],[],null,["# tf.keras.metrics.FalsePositives\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/confusion_metrics.py#L78-L119) |\n\nCalculates the number of false positives.\n\nInherits From: [`Metric`](../../../tf/keras/Metric) \n\n tf.keras.metrics.FalsePositives(\n thresholds=None, name=None, dtype=None\n )\n\n### Used in the notebooks\n\n| Used in the tutorials |\n|-------------------------------------------------------------------------------------------------------------|\n| - [Classification on imbalanced data](https://www.tensorflow.org/tutorials/structured_data/imbalanced_data) |\n\nIf `sample_weight` is given, calculates the sum of the weights of\nfalse positives. This metric creates one local variable, `accumulator`\nthat is used to keep track of the number of false positives.\n\nIf `sample_weight` is `None`, weights default to 1.\nUse `sample_weight` of 0 to mask values.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|--------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `thresholds` | (Optional) Defaults to `0.5`. A float value, or a Python list/tuple of float threshold values in `[0, 1]`. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is `True`, below is `False`). If used with a loss function that sets `from_logits=True` (i.e. no sigmoid applied to predictions), `thresholds` should be set to 0. One metric value is generated for each threshold value. |\n| `name` | (Optional) string name of the metric instance. |\n| `dtype` | (Optional) data type of the metric result. 
|\n\n\u003cbr /\u003e\n\n#### Examples:\n\n m = keras.metrics.FalsePositives()\n m.update_state([0, 1, 0, 0], [0, 0, 1, 1])\n m.result()\n 2.0\n\n m.reset_state()\n m.update_state([0, 1, 0, 0], [0, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n m.result()\n 1.0\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Attributes ---------- ||\n|-------------|---------------|\n| `dtype` | \u003cbr /\u003e \u003cbr /\u003e |\n| `variables` | \u003cbr /\u003e \u003cbr /\u003e |\n\n\u003cbr /\u003e\n\nMethods\n-------\n\n### `add_variable`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/metric.py#L186-L202) \n\n add_variable(\n shape, initializer, dtype=None, aggregation='sum', name=None\n )\n\n### `add_weight`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/metric.py#L204-L208) \n\n add_weight(\n shape=(), initializer=None, dtype=None, name=None\n )\n\n### `from_config`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/metric.py#L226-L228) \n\n @classmethod\n from_config(\n config\n )\n\n### `get_config`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/confusion_metrics.py#L72-L75) \n\n get_config()\n\nReturn the serializable config of the metric.\n\n### `reset_state`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/metric.py#L102-L109) \n\n reset_state()\n\nReset all of the metric state variables.\n\nThis function is called between epochs/steps,\nwhen a metric is evaluated during training.\n\n### `result`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/confusion_metrics.py#L65-L70) \n\n result()\n\nCompute the current metric value.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ||\n|---|---|\n| A scalar tensor, or a dictionary of scalar tensors. ||\n\n\u003cbr /\u003e\n\n### `stateless_reset_state`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/metric.py#L164-L177) \n\n stateless_reset_state()\n\n### `stateless_result`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/metric.py#L148-L162) \n\n stateless_result(\n metric_variables\n )\n\n### `stateless_update_state`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/metric.py#L115-L138) \n\n stateless_update_state(\n metric_variables, *args, **kwargs\n )\n\n### `update_state`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/confusion_metrics.py#L46-L63) \n\n update_state(\n y_true, y_pred, sample_weight=None\n )\n\nAccumulates the metric statistics.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ||\n|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `y_true` | The ground truth values. |\n| `y_pred` | The predicted values. |\n| `sample_weight` | Optional weighting of each example. Defaults to `1`. Can be a tensor whose rank is either 0, or the same rank as `y_true`, and must be broadcastable to `y_true`. |\n\n\u003cbr /\u003e\n\n### `__call__`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/metrics/metric.py#L217-L220) \n\n __call__(\n *args, **kwargs\n )\n\nCall self as a function."]]