Building a machine learning model is only half the job — you also need to evaluate how well it performs and validate that it will generalise to new, unseen data. Choosing the right evaluation metrics and validation strategies is critical to building reliable ML systems.
A model that appears to perform well on training data may fail completely on new data. Proper evaluation answers two questions: how well does the model actually perform, and will that performance hold on data it has never seen?
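The gap between training and test performance is easy to see with a held-out set. The sketch below uses synthetic data and an unpruned decision tree purely for illustration (the data, model choice, and parameters are assumptions, not part of the lesson):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, illustrative data: 200 samples, 20 mostly-noise features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# Hold out unseen data so we measure generalisation, not memorisation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# An unpruned tree can memorise the training set perfectly,
# so its training accuracy says nothing about new data.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

The training accuracy will be perfect here while the test accuracy is not, which is exactly why evaluation must be done on data the model never saw.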
The confusion matrix is the foundation of classification evaluation. For a binary classifier:
| | Predicted Positive | Predicted Negative |
|---|---|---|
| **Actual Positive** | True Positive (TP) | False Negative (FN) |
| **Actual Negative** | False Positive (FP) | True Negative (TN) |
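Most classification metrics are simple ratios of the four cells above. A minimal sketch with made-up counts (the numbers are illustrative, not from the lesson):

```python
# Illustrative counts from a hypothetical confusion matrix.
tp, fn = 40, 10   # actual positives: found vs. missed
fp, tn = 5, 45    # actual negatives: false alarms vs. correct rejections

accuracy  = (tp + tn) / (tp + tn + fp + fn)  # fraction of all predictions that are correct
precision = tp / (tp + fp)                   # of predicted positives, how many were real
recall    = tp / (tp + fn)                   # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(accuracy, precision, recall, f1)
```

With these counts, accuracy is 0.85 and recall is 0.80; precision is higher than recall because the model produces few false alarms but misses more positives.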
```python
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt

# y_test and y_pred are the true and predicted labels from an earlier step.
cm = confusion_matrix(y_test, y_pred)

# Rows are actual classes, columns are predicted classes.
disp = ConfusionMatrixDisplay(cm, display_labels=['Negative', 'Positive'])
disp.plot(cmap='Blues')
plt.title('Confusion Matrix')
plt.show()
```
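scikit-learn can also compute the summary metrics directly from labels and predictions, without building the matrix by hand. A short sketch with made-up arrays (the values are assumptions for illustration):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Made-up true labels and predictions for illustration.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Each function compares the two label sequences element-wise.
print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
```

Here one positive is missed (a false negative) and one negative is flagged (a false positive), so all four metrics come out to 0.75.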