View source on GitHub
Library containing helpers for adding post-export metrics for evaluation.
These post-export metrics can be passed in the add_post_export_metrics parameter of Evaluate so that they are computed during evaluation.
Functions
auc(...): Adds an AUC metric (area under the ROC curve) computed over the model's predictions.
auc_plots(...): Adds the plot data (confusion matrices over a range of thresholds) needed to render ROC and precision-recall curves.
calibration(...): Adds a calibration metric: the ratio of mean prediction to mean label.
calibration_plot_and_prediction_histogram(...): Adds a calibration plot together with a histogram of the model's predictions.
confusion_matrix_at_thresholds(...): Adds confusion matrix counts (and derived precision/recall) at the given decision thresholds.
example_count(...): Adds a count of the examples evaluated.
example_weight(...): Adds the total weight of the examples evaluated.
fairness_auc(...): Adds subgroup AUC metrics for fairness evaluation.
fairness_indicators(...): Adds fairness metrics (e.g. false positive and false negative rates) at the given thresholds.
mean_absolute_error(...): Adds the mean absolute error between labels and predictions.
mean_squared_error(...): Adds the mean squared error between labels and predictions.
precision_at_k(...): Adds precision of the top-K predictions for the given cutoff(s) K.
recall_at_k(...): Adds recall of the top-K predictions for the given cutoff(s) K.
root_mean_squared_error(...): Adds the root mean squared error between labels and predictions.
squared_pearson_correlation(...): Adds the squared Pearson correlation (r²) between labels and predictions.
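As an illustration of the semantics of one of these metrics, the following is a minimal standalone sketch (not the TFMA implementation) of what confusion_matrix_at_thresholds computes for a binary classification task: true/false positive and negative counts at each decision threshold. The function name and the tuple return shape here are illustrative assumptions, not the library's API.

```python
def confusion_matrix_at_thresholds(labels, predictions, thresholds):
    """Return {threshold: (tp, fp, tn, fn)} for binary labels and scores.

    Illustrative sketch only; TFMA computes this inside its evaluation
    pipeline and also derives precision/recall from these counts.
    """
    matrices = {}
    for t in thresholds:
        tp = fp = tn = fn = 0
        for label, score in zip(labels, predictions):
            predicted_positive = score >= t
            if predicted_positive and label == 1:
                tp += 1
            elif predicted_positive and label == 0:
                fp += 1
            elif not predicted_positive and label == 0:
                tn += 1
            else:
                fn += 1
        matrices[t] = (tp, fp, tn, fn)
    return matrices

matrices = confusion_matrix_at_thresholds(
    labels=[1, 0, 1, 0],
    predictions=[0.9, 0.8, 0.4, 0.1],
    thresholds=[0.5])
# At threshold 0.5: one true positive (0.9), one false positive (0.8),
# one false negative (0.4), one true negative (0.1).
```

From these per-threshold counts, precision (tp / (tp + fp)) and recall (tp / (tp + fn)) follow directly, which is why a single pass over thresholds also yields the data for ROC and precision-recall plots.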
| Other Members | |
|---|---|
| `DEFAULT_KEY_PREFERENCE` | `('logistic', 'predictions', 'probabilities', 'logits')` |
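Assuming `DEFAULT_KEY_PREFERENCE` is an ordered preference list used to locate the prediction tensor among a model's named outputs (an assumption; the source does not state its use), its behavior can be sketched as picking the first preferred key that is present. The helper name `pick_output_key` below is hypothetical, not part of the library.

```python
DEFAULT_KEY_PREFERENCE = ('logistic', 'predictions', 'probabilities', 'logits')

def pick_output_key(output_dict, key_preference=DEFAULT_KEY_PREFERENCE):
    """Return the first preferred key present in the model's output dict.

    Hypothetical helper illustrating an ordered key-preference lookup.
    """
    for key in key_preference:
        if key in output_dict:
            return key
    raise KeyError('no known prediction key among %r' % sorted(output_dict))

# 'probabilities' precedes 'logits' in the preference order, so it wins.
key = pick_output_key({'logits': [0.3], 'probabilities': [0.57]})
```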