# tf.contrib.metrics.streaming_sparse_precision_at_top_k
Computes precision@k of top-k predictions with respect to sparse labels.
    tf.contrib.metrics.streaming_sparse_precision_at_top_k(
        top_k_predictions, labels, class_id=None, weights=None,
        metrics_collections=None, updates_collections=None, name=None
    )
If `class_id` is not specified, we calculate precision as the ratio of true positives (i.e., correct predictions: items in `top_k_predictions` that are found in the corresponding row in `labels`) to positives (all `top_k_predictions`). If `class_id` is specified, we calculate precision by considering only the rows in the batch for which `class_id` is among the top `k` predictions, and computing the fraction of those rows for which `class_id` is in the corresponding row in `labels`.

We expect precision to decrease as `k` increases.
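For example, `top_k_predictions` typically comes from `tf.nn.top_k` over raw prediction scores. A minimal sketch, assuming TensorFlow 1.x (the toy scores and labels below are illustrative, not part of the API):

    import tensorflow as tf

    # Raw scores for a batch of 2 examples over 4 classes.
    scores = tf.constant([[0.9, 0.6, 0.1, 0.2],    # top-2 indices: [0, 1]
                          [0.1, 0.8, 0.05, 0.7]])  # top-2 indices: [1, 3]
    _, top_k_predictions = tf.nn.top_k(scores, k=2)
    # tf.nn.top_k returns int32 indices; align them with the int64 labels.
    top_k_predictions = tf.cast(top_k_predictions, tf.int64)

    # Each row lists that example's target class ids.
    labels = tf.constant([[0, 2], [1, 3]], dtype=tf.int64)

    precision, update_op = (
        tf.contrib.metrics.streaming_sparse_precision_at_top_k(
            top_k_predictions, labels))

    with tf.Session() as sess:
        sess.run(tf.local_variables_initializer())  # metric counters are local
        sess.run(update_op)
        print(sess.run(precision))  # (1 + 2) / (2 + 2) = 0.75

Here row 0 contributes 1 true positive (class 0) and 1 false positive (class 1), while row 1 contributes 2 true positives, giving precision@2 = 3/4.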
`streaming_sparse_precision_at_top_k` creates two local variables, `true_positive_at_k` and `false_positive_at_k`, that are used to compute the precision@k frequency. This frequency is ultimately returned as `precision_at_k`: an idempotent operation that simply divides `true_positive_at_k` by the total (`true_positive_at_k` + `false_positive_at_k`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns `precision_at_k`. Internally, set operations applied to `top_k_predictions` and `labels` calculate the true positives and false positives, weighted by `weights`; `update_op` then increments `true_positive_at_k` and `false_positive_at_k` by these values.
If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
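A hedged sketch of the streaming pattern, again assuming TensorFlow 1.x: run `update_op` once per batch to accumulate the counters, then read `precision` for the running value. The placeholder names, toy batches, and per-row weights below are illustrative; a weight of 0 drops that row from both counters:

    import tensorflow as tf

    # Hypothetical placeholders for top-2 prediction indices, sparse labels,
    # and one weight per row (rank n-1 = 1, since labels has rank 2).
    top_k_ph = tf.placeholder(tf.int64, shape=[None, 2])
    labels_ph = tf.placeholder(tf.int64, shape=[None, 2])
    weights_ph = tf.placeholder(tf.float32, shape=[None])

    precision, update_op = (
        tf.contrib.metrics.streaming_sparse_precision_at_top_k(
            top_k_ph, labels_ph, weights=weights_ph))

    # Toy stream: (top-2 predictions, labels, per-row weights).
    batches = [
        ([[0, 1]], [[0, 2]], [1.0]),  # 1 of 2 predictions correct
        ([[1, 3]], [[1, 3]], [1.0]),  # 2 of 2 predictions correct
        ([[2, 0]], [[1, 3]], [0.0]),  # weight 0 masks this row entirely
    ]

    with tf.Session() as sess:
        sess.run(tf.local_variables_initializer())
        for preds, labs, w in batches:
            sess.run(update_op, feed_dict={
                top_k_ph: preds, labels_ph: labs, weights_ph: w})
        print(sess.run(precision))  # accumulated precision@2 = 3 / 4 = 0.75

Because the metric stores its counters in local variables, they must be initialized with `tf.local_variables_initializer()` (not the global initializer) before the first `update_op` run.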
| Args | |
|---|---|
| `top_k_predictions` | Integer `Tensor` with shape `[D1, ... DN, k]` where N >= 1. Commonly, N=1 and `top_k_predictions` has shape `[batch_size, k]`. The final dimension contains the indices of top-k labels. `[D1, ... DN]` must match `labels`. |
| `labels` | `int64` `Tensor` or `SparseTensor` with shape `[D1, ... DN, num_labels]`, where N >= 1 and `num_labels` is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape `[batch_size, num_labels]`. `[D1, ... DN]` must match `top_k_predictions`. Values should be in range `[0, num_classes)`, where `num_classes` is the last dimension of `predictions`. Values outside this range are ignored. |
| `class_id` | Integer class ID for which we want binary metrics. This should be in range `[0, num_classes)`, where `num_classes` is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NAN. |
| `weights` | `Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either 1, or the same as the corresponding `labels` dimension). |
| `metrics_collections` | An optional list of collections that values should be added to. |
| `updates_collections` | An optional list of collections that updates should be added to. |
| `name` | Name of new update operation, and namespace for other dependent ops. |
| Returns | |
|---|---|
| `precision` | Scalar `float64` `Tensor` with the value of `true_positives` divided by the sum of `true_positives` and `false_positives`. |
| `update_op` | `Operation` that increments the `true_positives` and `false_positives` variables appropriately, and whose value matches `precision`. |
| Raises | |
|---|---|
| `ValueError` | If `weights` is not `None` and its shape doesn't match `predictions`, or if either `metrics_collections` or `updates_collections` is not a list or tuple. |
| `ValueError` | If `top_k_predictions` has rank < 2. |