📄️ Accuracy
Understanding the most common evaluation metric, its formula, and why it breaks down on imbalanced datasets.
📄️ Precision
Understanding Precision, its mathematical foundation, and why it is vital when False Positives are costly.
📄️ Recall
Understanding Recall, its mathematical definition, and why it is critical when False Negatives are costly.
📄️ F1-Score
Mastering the harmonic mean of Precision and Recall to evaluate models on imbalanced datasets.
📄️ ROC & AUC
Evaluating classifier performance across all thresholds using the Receiver Operating Characteristic and Area Under the Curve.
📄️ Log Loss
Understanding cross-entropy loss and why it is the gold standard for evaluating probability-based classifiers.
📄️ Confusion Matrix
The foundation of classification evaluation: True Positives, False Positives, True Negatives, and False Negatives.