Computes recall for sets of labels and predictions.
Inherits From: Recall, Metric
tfma.metrics.SetMatchRecall(
    thresholds: Optional[Union[float, List[float]]] = None,
    top_k: Optional[int] = None,
    name: Optional[str] = None,
    prediction_class_key: str = 'classes',
    prediction_score_key: str = 'scores',
    class_key: Optional[str] = None,
    weight_key: Optional[str] = None,
    **kwargs
)
The metric deals with labels and predictions that are provided in the form
of sets (stored as variable-length numpy arrays). The recall is the
micro-averaged classification recall. The metric is suitable for cases
where the number of classes is large or the list of classes cannot be
provided in advance.
Example:
  Label: ['cats']
  Predictions: {'classes': ['cats', 'dogs']}
The recall is 1.
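For concreteness, the following is a minimal, illustrative sketch (not the
TFMA implementation) of how micro-averaged set-match recall can be computed
from variable-length label and prediction arrays, assuming the default 0.5
score threshold; the example data is made up.

import numpy as np

labels = [np.array(['cats']), np.array(['dogs', 'birds'])]
predictions = [
    {'classes': np.array(['cats', 'dogs']), 'scores': np.array([0.9, 0.8])},
    {'classes': np.array(['dogs']), 'scores': np.array([0.7])},
]

threshold = 0.5
tp = fn = 0
for label, pred in zip(labels, predictions):
    # Keep only predicted classes whose score clears the threshold.
    predicted = {c for c, s in zip(pred['classes'], pred['scores']) if s > threshold}
    for cls in label:
        if cls in predicted:
            tp += 1  # label matched by a thresholded prediction
        else:
            fn += 1  # unmatched label counts as a false negative

recall = tp / (tp + fn)  # micro-averaged over all labels: 2 / 3
print(recall)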
Args

thresholds
    (Optional) A float value or a python list/tuple of float threshold
    values in [0, 1]. A threshold is compared with prediction values to
    determine the truth value of predictions (i.e., above the threshold is
    true, below is false). One metric value is generated for each threshold
    value. If neither thresholds nor top_k are set, the default is to
    calculate recall with thresholds=0.5.
top_k
    (Optional) Used with a multi-class model to specify that the top-k
    values should be used to compute the confusion matrix. The net effect is
    that the non-top-k values are truncated and the matrix is then
    constructed from the average TP, FP, TN, FN across the classes. When
    top_k is used, metrics_specs.binarize settings must not be present. When
    top_k is used, the default threshold is float('-inf'). In this case,
    unmatched labels are still considered false negatives, since they have
    predictions with a confidence score of float('-inf').
name
    (Optional) string name of the metric instance.
prediction_class_key
    The key name of the classes in the prediction.
prediction_score_key
    The key name of the scores in the prediction.
class_key
    (Optional) The key name of the classes in class-weight pairs. If it is
    not provided, the classes are assumed to be the label classes.
weight_key
    (Optional) The key name of the weights of classes in class-weight
    pairs. The value in this key should be a numpy array of the same length
    as the classes in class_key. The key should be stored under the
    features key.
**kwargs
    (Optional) Additional args to pass along to init (and eventually on to
    _metric_computations and _metric_values). The args are passed to the
    recall metric, the confusion matrix metric, and the binary
    classification metric.
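As a hedged sketch of how the threshold-related arguments might be set (the
instance names below are purely illustrative, not part of the API):

import tensorflow_model_analysis as tfma

# Illustrative sketch: one metric value is emitted per threshold value.
per_threshold = tfma.metrics.SetMatchRecall(
    thresholds=[0.25, 0.5, 0.75], name='set_match_recall_per_threshold')

# Alternatively, restrict matching to the top-k scoring predicted classes.
top3 = tfma.metrics.SetMatchRecall(top_k=3, name='set_match_recall_top3')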
Attributes

compute_confidence_interval
    Whether to compute confidence intervals for this metric.
    Note that this may not completely remove the computational overhead
    involved in computing a given metric. This is only respected by the
    jackknife confidence interval method.
Methods
computations
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates computations associated with metric.
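A brief, hedged illustration of calling the method directly (in practice
TFMA invokes it internally during evaluation):

import tensorflow_model_analysis as tfma

# Hedged sketch: returns the computations the evaluator will run.
metric = tfma.metrics.SetMatchRecall(name='set_match_recall')
computations = metric.computations(example_weighted=False)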
from_config
@classmethod
from_config(
    config: Dict[str, Any]
) -> 'Metric'
get_config
get_config() -> Dict[str, Any]
Returns serializable config.
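A hedged round-trip sketch showing how get_config and from_config pair up
(the argument values are illustrative):

import tensorflow_model_analysis as tfma

# Hedged sketch: serialize the metric to a plain dict and rebuild it.
metric = tfma.metrics.SetMatchRecall(thresholds=[0.5], name='set_match_recall')
config = metric.get_config()
restored = tfma.metrics.SetMatchRecall.from_config(config)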
result
result(
    tp: float, tn: float, fp: float, fn: float
) -> float
Function for computing metric value from TP, TN, FP, FN values.
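For recall, the result reduces to TP / (TP + FN); TN and FP do not affect
the value. A minimal sketch of the expected behaviour (the zero-division
handling here is an assumption, not taken from the library):

# Illustrative only: recall computed from confusion-matrix counts.
def recall_from_counts(tp: float, tn: float, fp: float, fn: float) -> float:
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

assert abs(recall_from_counts(2.0, 0.0, 1.0, 1.0) - 2.0 / 3.0) < 1e-9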