Figure 4-8. Accuracy, precision, and recall
Accuracy
As shown in Figure 4-8, accuracy is the number of correct predictions made as a ratio of all predictions made. It is the most common evaluation metric for classification problems and also the most misused. Accuracy is most suitable when each class contains roughly the same number of observations (which is rarely the case) and when all predictions and their associated errors are equally important (which is often not the case).
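For illustration only, and assuming scikit-learn is available, accuracy could be computed from a few made-up labels as follows (the values below are not from any real model):

from sklearn.metrics import accuracy_score

# Made-up ground-truth labels and model predictions, for illustration only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Accuracy = (TP + TN) / (TP + TN + FP + FN): correct predictions over all predictions.
print(accuracy_score(y_true, y_pred))  # 0.75 (6 of the 8 predictions are correct)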
Precision
Precision is the percentage of correctly predicted positive instances out of all instances predicted as positive. The denominator is therefore the total number of instances the model labeled positive, i.e., true positives plus false positives. Precision is a good measure when the cost of a false positive is high (e.g., email spam detection).
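Using the same made-up labels, a minimal sketch with scikit-learn's precision_score:

from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Precision = TP / (TP + FP): of all instances predicted positive, the fraction that is truly positive.
print(precision_score(y_true, y_pred))  # 0.75 (3 true positives out of 4 predicted positives)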
Recall
Recall (also called sensitivity or the true positive rate) is the percentage of correctly predicted positive instances out of all actual positive instances. The denominator (true positives + false negatives) is therefore the number of positive instances actually present in the dataset. Recall is a good measure when the cost of a false negative is high (e.g., fraud detection).
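Again with the same made-up labels, recall can be sketched with scikit-learn's recall_score:

from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Recall = TP / (TP + FN): of all actual positives, the fraction the model recovers.
print(recall_score(y_true, y_pred))  # 0.75 (3 true positives out of 4 actual positives)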
In addition to accuracy, precision, and recall, some of the other commonly used evaluation metrics for classification are discussed in the following sections.
Area under ROC curve
Area under the ROC curve (AUC) is an evaluation metric for binary classification problems. The ROC curve plots the true positive rate against the false positive rate across classification thresholds, and the AUC measures how well the model separates the two classes. The higher the AUC, the better the model is at predicting zeros as zeros and ones as ones; an AUC of 0.5 means the model has no class separation capacity whatsoever. The probabilistic interpretation of the AUC is that if you randomly choose a positive case and a negative case, the AUC gives the probability that the classifier ranks the positive case above the negative case.
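Note that the AUC is computed from predicted scores or probabilities rather than hard 0/1 labels. A minimal sketch with scikit-learn's roc_auc_score and made-up scores:

from sklearn.metrics import roc_auc_score

# Made-up true labels and predicted probabilities of the positive class.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]

# Equals the probability that a randomly chosen positive is scored higher
# than a randomly chosen negative (15 of the 16 positive/negative pairs here).
print(roc_auc_score(y_true, y_score))  # 0.9375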
Confusion matrix
A confusion matrix lays out the performance of a learning algorithm. The confusion matrix is simply a square matrix that reports the counts of the true positive (TP), true negative (TN), false positive (FP), and false negative (FN) predictions of a classifier, as shown in Figure 4-9.
Figure 4-9. Confusion matrix (columns: predicted values, positive (1) and negative (0))
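As a sketch, scikit-learn's confusion_matrix reports these counts with rows as actual classes and columns as predicted classes, ordered [0, 1] by default, so its layout may be transposed relative to Figure 4-9:

from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Layout for labels [0, 1]:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
# [[3 1]
#  [1 3]]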
