About evaluation metrics
Root Mean Square Error (RMSE)
RMSE measures the typical magnitude of the prediction error: the square root of the mean squared difference between predicted and actual values. Because the differences are squared before averaging, large errors are penalized more heavily. It is calculated as follows:
\[ \text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2} \]
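A minimal sketch in base R (the vectors `y_true` and `y_pred` below are hypothetical example values, not output from this package):

```r
# Hypothetical observed and predicted values
y_true <- c(3.0, 5.5, 2.1, 8.4)
y_pred <- c(2.8, 6.0, 2.5, 7.9)

# Square the residuals, average them, then take the square root
rmse <- sqrt(mean((y_true - y_pred)^2))
rmse  # ~0.418
```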
R-squared (R²)
R² measures the proportion of the variance in the dependent variable that is predictable from the independent variable(s). It is calculated as follows:
\[ R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} \]
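Continuing the sketch above with the same hypothetical vectors:

```r
ss_res <- sum((y_true - y_pred)^2)        # residual sum of squares
ss_tot <- sum((y_true - mean(y_true))^2)  # total sum of squares
r_squared <- 1 - ss_res / ss_tot
r_squared
```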
Precision-Recall Area Under Curve (PR-AUC)
PR-AUC measures the area under the precision-recall curve. It is often used in binary classification tasks where the class distribution is imbalanced, because both precision and recall focus on the positive (minority) class.
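One way to compute this in R is with the PRROC package (an assumption; the text does not prescribe a package, and the labels and scores below are hypothetical):

```r
library(PRROC)

labels <- c(1, 1, 0, 0, 1, 0, 0, 0)                          # hypothetical binary labels
scores <- c(0.90, 0.80, 0.70, 0.60, 0.55, 0.40, 0.30, 0.10)  # hypothetical model scores

# pr.curve() takes the scores of the positive and negative class separately
pr <- pr.curve(scores.class0 = scores[labels == 1],
               scores.class1 = scores[labels == 0])
pr$auc.integral
```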
Performance Ratio
The performance ratio is calculated as the ratio of the model's PR-AUC to the PR-AUC of a random classifier (pr_randm_AUC). It provides a measure of how well the model performs compared to a random baseline.
\[ \text{Performance Ratio} = \frac{\text{PR-AUC}}{\text{PR-AUC}_{\text{random}}} \]
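A random classifier's precision-recall curve is flat at the positive-class prevalence, so its PR-AUC equals that prevalence. A minimal sketch reusing the hypothetical objects from the PR-AUC example (the name `pr_randm_AUC` mirrors the text):

```r
# PR-AUC of a random classifier = prevalence of the positive class
pr_randm_AUC <- mean(labels == 1)

performance_ratio <- pr$auc.integral / pr_randm_AUC
performance_ratio  # > 1 means the model beats the random baseline
```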
Receiver Operating Characteristic Area Under Curve (ROC AUC)
ROC AUC measures the area under the receiver operating characteristic curve. It evaluates the classifier's ability to distinguish between classes: a value of 0.5 corresponds to random guessing and 1.0 to perfect separation.
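A sketch using the pROC package (an assumption; any ROC implementation would do), reusing the hypothetical labels and scores from the PR-AUC example:

```r
library(pROC)

roc_obj <- roc(response = labels, predictor = scores)
auc(roc_obj)
```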
Accuracy
Accuracy measures the proportion of correct predictions out of the total predictions made by the model. It is calculated as follows:
\[ \text{Accuracy} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}} \]
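A minimal sketch in base R from hypothetical confusion-matrix counts:

```r
# Hypothetical confusion-matrix counts
TP <- 40; TN <- 45; FP <- 5; FN <- 10

accuracy <- (TP + TN) / (TP + TN + FP + FN)
accuracy  # 85 / 100 = 0.85
```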
Specificity
Specificity measures the proportion of true negatives out of all actual negatives. It is calculated as follows:
\[ \text{Specificity} = \frac{\text{TN}}{\text{TN} + \text{FP}} \]
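Reusing the hypothetical counts from the accuracy example:

```r
specificity <- TN / (TN + FP)
specificity  # 45 / 50 = 0.9
```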