tfma.metrics.NDCG
NDCG (normalized discounted cumulative gain) metric.
Inherits From: Metric
tfma.metrics.NDCG(
gain_key: str,
top_k_list: Optional[List[int]] = None,
name: str = NDCG_NAME
)
Calculates NDCG@k for each value of k in a given set of top_k values, computed
from a list of gains (relevance scores) that are sorted by the associated
predictions. The top_k_list can be passed as part of the NDCG metric config, or
via tfma.MetricsSpec.binarize.top_k_list when configuring multiple top_k
metrics. The gain (relevance score) for each example is taken from the value
stored under the 'gain_key' feature. The returned value of NDCG@k is a weighted
average of NDCG@k over the set of queries, using the example weights.
NDCG@k = (DCG@k for the given ranking) / (DCG@k for the ideal ranking)

DCG@k = sum_{i=1}^k gain_i / log_2(i + 1), where gain_i is the gain (relevance
score) of the i-th ranked response, indexed from 1.
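As a quick illustration of the formulas above, here is a minimal standalone sketch of DCG@k and NDCG@k; it is not the TFMA implementation, just the arithmetic the metric performs per query:

```python
import math

def dcg_at_k(gains, k):
    """DCG@k = sum_{i=1}^k gain_i / log2(i + 1).

    `gains` must already be ordered by the associated predictions
    (highest-scored response first), indexed from 1.
    """
    return sum(g / math.log2(i + 1) for i, g in enumerate(gains[:k], start=1))

def ndcg_at_k(gains, k):
    """NDCG@k: DCG@k of the given ranking divided by DCG@k of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0
```

For example, `ndcg_at_k([1, 0], 2)` is 1.0 (the prediction order matches the ideal order), while `ndcg_at_k([0, 1], 2)` is 1/log2(3) ≈ 0.63. TFMA then averages these per-query values using the example weights.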
This is a query/ranking based metric so a query_key must also be provided in
the associated tfma.MetricsSpec.
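As a sketch of how this might look in a text-format tfma.EvalConfig, where the feature names 'gain' and 'query_id' and the k values are placeholders for your own features:

```
metrics_specs {
  metrics {
    class_name: "NDCG"
    config: '"gain_key": "gain", "top_k_list": [1, 5, 10]'
  }
  query_key: "query_id"
}
```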
| Args | |
|---|---|
| `gain_key` | Key of the feature in the features dictionary that holds the gain values. |
| `top_k_list` | Values for top k. This can also be set using the tfma.MetricsSpec.binarize.top_k_list associated with the metric. |
| `name` | Metric name. |
| Attributes | |
|---|---|
| `compute_confidence_interval` | Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method. |
Methods
computations
View source
computations(
eval_config: Optional[tfma.EvalConfig] = None,
schema: Optional[schema_pb2.Schema] = None,
model_names: Optional[List[str]] = None,
output_names: Optional[List[str]] = None,
sub_keys: Optional[List[Optional[SubKey]]] = None,
aggregation_type: Optional[AggregationType] = None,
class_weights: Optional[Dict[int, float]] = None,
example_weighted: bool = False,
query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates the computations associated with the metric.
from_config
View source
@classmethod
from_config(
config: Dict[str, Any]
) -> 'Metric'
get_config
View source
get_config() -> Dict[str, Any]
Returns serializable config.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-04-26 UTC.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-04-26 UTC."],[],[],null,["# tfma.metrics.NDCG\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/metrics/ndcg.py#L28-L61) |\n\nNDCG (normalized discounted cumulative gain) metric.\n\nInherits From: [`Metric`](../../tfma/metrics/Metric) \n\n tfma.metrics.NDCG(\n gain_key: str,\n top_k_list: Optional[List[int]] = None,\n name: str = NDCG_NAME\n )\n\nCalculates NDCG@k for a given set of top_k values calculated from a list of\ngains (relevance scores) that are sorted based on the associated predictions.\nThe top_k_list can be passed as part of the NDCG metric config or using\ntfma.MetricsSpec.binarize.top_k_list if configuring multiple top_k metrics.\nThe gain (relevance score) is determined from the value stored in the\n'gain_key' feature. 
The value of NDCG@k returned is a weighted average of\nNDCG@k over the set of queries using the example weights.\n\nNDCG@k = (DCG@k for the given rank)/(DCG@k\nDCG@k = sum_{i=1}\\^k gain_i/log_2(i+1), where gain_i is the gain (relevance\nscore) of the i\\^th ranked response, indexed from 1.\n\nThis is a query/ranking based metric so a query_key must also be provided in\nthe associated tfma.MetricsSpec.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|--------------|-------------------------------------------------------------------------------------------------------------------|\n| `gain_key` | Key of feature in features dictionary that holds gain values. |\n| `top_k_list` | Values for top k. This can also be set using the tfma.MetricsSpec.binarize.top_k_list associated with the metric. |\n| `name` | Metric name. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Attributes ---------- ||\n|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `compute_confidence_interval` | Whether to compute confidence intervals for this metric. \u003cbr /\u003e Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method. 
|\n\n\u003cbr /\u003e\n\nMethods\n-------\n\n### `computations`\n\n[View source](https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/metrics/metric_types.py#L862-L888) \n\n computations(\n eval_config: Optional[../../tfma/EvalConfig] = None,\n schema: Optional[schema_pb2.Schema] = None,\n model_names: Optional[List[str]] = None,\n output_names: Optional[List[str]] = None,\n sub_keys: Optional[List[Optional[SubKey]]] = None,\n aggregation_type: Optional[AggregationType] = None,\n class_weights: Optional[Dict[int, float]] = None,\n example_weighted: bool = False,\n query_key: Optional[str] = None\n ) -\u003e ../../tfma/metrics/MetricComputations\n\nCreates computations associated with metric.\n\n### `from_config`\n\n[View source](https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/metrics/metric_types.py#L842-L847) \n\n @classmethod\n from_config(\n config: Dict[str, Any]\n ) -\u003e 'Metric'\n\n### `get_config`\n\n[View source](https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/metrics/metric_types.py#L838-L840) \n\n get_config() -\u003e Dict[str, Any]\n\nReturns serializable config."]]