Recall, a commonly used performance metric for classification models, is the fraction of positives that are correctly classified:

Recall = True Positives / (True Positives + False Negatives)
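The definition above can be sketched directly in code. This is a minimal illustration, not any particular library's implementation; the function name `recall` and the example labels are chosen for demonstration.

```python
def recall(y_true, y_pred):
    """Fraction of actual positives that the model classifies as positive.

    Recall = TP / (TP + FN), where TP counts positives predicted as
    positive and FN counts positives predicted as negative.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Example: 4 actual positives, 3 of them correctly predicted.
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]
print(recall(y_true, y_pred))  # 3 / 4 = 0.75
```

Note that the false positive at index 4 has no effect on recall; only the missed positive at index 1 lowers the score.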
Recall is also commonly referred to as “true positive rate,” “sensitivity,” and “hit rate.” For a detailed explanation of how recall is related to other performance metrics, see Part 1 and Part 2 of our blog series on enterprise AI metrics.
Recall places a high importance on reducing the number of false negatives, that is, positive cases that the model misclassifies as negative. For that reason, it is a key metric in mission-critical applications where a false negative could lead to loss of life or millions of dollars in damages. In such applications, it is essential to maximize recall.
In a trivial extreme, perfect recall can be achieved by classifying all cases as positive. That would ensure that there are no false negatives, but it could result in many false positives (negative cases that the model misclassifies as positive). That is why recall is usually combined with other metrics, such as false positive rate and precision, to quantify the trade-off between false negatives and false positives.
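The trade-off described above can be demonstrated by comparing the trivial all-positive classifier's recall with its precision. This is a hedged sketch with illustrative labels; the `recall` and `precision` helpers are written out here only to make the example self-contained.

```python
def recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

def precision(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Imbalanced data: 2 positives out of 10 cases.
y_true = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]

# Trivial classifier: predict positive for everything.
y_pred = [1] * len(y_true)

print(recall(y_true, y_pred))     # 1.0  -- no false negatives possible
print(precision(y_true, y_pred))  # 0.2  -- 8 of 10 predictions are false positives
```

Recall alone rewards this useless model with a perfect score; precision exposes that 80 percent of its positive predictions are wrong, which is why the two metrics are reported together.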
Recall is included as a scoring metric in the C3 AI Application Platform. C3 AI Reliability is used in mission-critical applications, such as aircraft diagnostics, and places a high emphasis on recall because a false negative could lead to a failure that causes loss of life or millions of dollars in damage to machinery and technology. BHC3 Reliability™ is a product built by the BakerHughesC3.ai partnership that detects and addresses failures in industrial processes. It places a high emphasis on recall because false negatives may cause failures that could compromise human safety, leaks that could have adverse environmental impacts, or machine breakages that could cost millions of dollars to address.