Average precision of object detections at a single IoU threshold.
Inherits From: Metric
tfma.metrics.COCOAveragePrecision(
    num_thresholds: Optional[int] = None,
    iou_threshold: Optional[float] = None,
    class_id: Optional[int] = None,
    class_weight: Optional[float] = None,
    area_range: Optional[Tuple[float, float]] = None,
    max_num_detections: Optional[int] = None,
    recalls: Optional[List[float]] = None,
    num_recalls: Optional[int] = None,
    name: Optional[str] = None,
    labels_to_stack: Optional[List[str]] = None,
    predictions_to_stack: Optional[List[str]] = None,
    num_detections_key: Optional[str] = None,
    allow_missing_key: bool = False
)
It computes the average precision of object detections for a single class and
a single iou_threshold.
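A minimal usage sketch (the parameter values below are illustrative, not defaults): the metric is constructed directly and then handed to TFMA through metric specs.

import tensorflow_model_analysis as tfma

# Average precision for class 1 at an IoU match threshold of 0.5,
# stacking per-column bounding-box outputs into single arrays.
metric = tfma.metrics.COCOAveragePrecision(
    iou_threshold=0.5,
    class_id=1,
    max_num_detections=100,
    labels_to_stack=['xmin', 'ymin', 'xmax', 'ymax', 'class_id'],
    predictions_to_stack=['xmin', 'ymin', 'xmax', 'ymax', 'class_id', 'scores'],
)

# Metrics are normally wrapped in specs for an EvalConfig.
metrics_specs = tfma.metrics.specs_from_metrics([metric])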
Args

num_thresholds: (Optional) Number of thresholds to use for calculating the confusion matrices and finding the precision at a given recall.

iou_threshold: (Optional) Threshold at which a detection and a ground-truth pair with a specific IoU are considered a match.

class_id: (Optional) The class id for calculating metrics.

class_weight: (Optional) The weight associated with the object class id.

area_range: (Optional) The area range within which objects are considered for metrics.

max_num_detections: (Optional) The maximum number of detections for a single image.

recalls: (Optional) Recalls at which precisions will be calculated.

num_recalls: (Optional) Used for object detection; the number of recalls at which to calculate average precision. The recall points are generated evenly spaced between 0 and 1. Only one of recalls and num_recalls should be used.

name: (Optional) String name of the metric instance.

labels_to_stack: (Optional) Keys for columns to be stacked as a single numpy array as the labels. The keys are searched for under labels, features, and transformed features. The desired format is [left boundary, top boundary, right boundary, bottom boundary, class id], e.g. ['xmin', 'ymin', 'xmax', 'ymax', 'class_id'].

predictions_to_stack: (Optional) Output names for columns to be stacked as a single numpy array as the prediction. These should be the model's output names. The desired format is [left boundary, top boundary, right boundary, bottom boundary, class id, confidence score], e.g. ['xmin', 'ymin', 'xmax', 'ymax', 'class_id', 'scores']. See the stacking sketch after this list.

num_detections_key: (Optional) An output name in which to find the number of detections to use for evaluation for a given example. It does nothing if predictions_to_stack is not set. The value for this output should be a scalar value or a single-value tensor. The stacked predictions will be truncated to the specified number of detections.

allow_missing_key: (Optional) If true, the preprocessor will return an empty array instead of raising errors.
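To illustrate the stacking performed for labels_to_stack and predictions_to_stack, here is a sketch in plain numpy (the column names and values are hypothetical):

import numpy as np

# Hypothetical per-column model outputs for one image with two detections.
outputs = {
    'xmin': np.array([0.1, 0.4]),
    'ymin': np.array([0.2, 0.5]),
    'xmax': np.array([0.3, 0.8]),
    'ymax': np.array([0.6, 0.9]),
    'class_id': np.array([1.0, 3.0]),
    'scores': np.array([0.9, 0.7]),
}

# Stacking the named columns produces a [num_detections, 6] array in the
# [left, top, right, bottom, class id, confidence score] order described above.
stacked = np.stack(
    [outputs[k] for k in ['xmin', 'ymin', 'xmax', 'ymax', 'class_id', 'scores']],
    axis=-1)
print(stacked.shape)  # (2, 6)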
Attributes

compute_confidence_interval: Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.
Methods
computations
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates the computations associated with the metric.
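This method is normally invoked by the TFMA evaluator rather than by user code; as a sketch, it can also be called directly with default arguments:

metric = tfma.metrics.COCOAveragePrecision(iou_threshold=0.5, class_id=1)

# Returns the MetricComputations that the TFMA pipeline will run.
computations = metric.computations(example_weighted=False)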
from_config
@classmethod
from_config(
    config: Dict[str, Any]
) -> 'Metric'
get_config
get_config() -> Dict[str, Any]
Returns serializable config.
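Together with from_config above, this supports a Keras-style round trip; a sketch:

metric = tfma.metrics.COCOAveragePrecision(iou_threshold=0.5, class_id=1)
config = metric.get_config()  # serializable dict of constructor kwargs

# Rebuild an equivalent metric instance from the config.
restored = tfma.metrics.COCOAveragePrecision.from_config(config)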