I want to use the AutoGluon library to find the best model. My problem is a classification problem with 8 classes. I have a weight for each class, and I want to rank models by the sum of the per-class F1 scores multiplied by these weights. How can I set this as the evaluation metric? For example, if a model's per-class F1 scores are [0.2, 0.3, 0., 0.5, 0.2, 0.5, 0., 0.1] and the weights are as defined below, here is what I have so far:
import numpy as np
from sklearn.metrics import f1_score

# Define the custom evaluation function
def weighted_f1_score(y_true, y_pred, weights):
    # Calculate the F1 score for each class, in sorted class-label order
    f1_scores = f1_score(y_true, y_pred, average=None, labels=sorted(weights))
    # Multiply each F1 score by the corresponding class weight
    # (weights is a dict, so align its values with the class order first)
    weight_array = np.array([weights[c] for c in sorted(weights)])
    # Calculate the total weighted F1 score
    return np.sum(f1_scores * weight_array)
# Define the weights for each class
weights = {
    0: 0.0385,
    1: 0.0328,
    2: 0.2791,
    3: 0.1812,
    4: 0.0113,
    5: 0.2952,
    6: 0.1614,
    7: 0.0001,
}
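With the example per-class F1 scores from the question, the metric is just a dot product of the two vectors. A quick standalone check with numpy, using the weights above:

```python
import numpy as np

# Example per-class F1 scores from the question and the class weights above
f1_scores = np.array([0.2, 0.3, 0., 0.5, 0.2, 0.5, 0., 0.1])
weight_array = np.array([0.0385, 0.0328, 0.2791, 0.1812,
                         0.0113, 0.2952, 0.1614, 0.0001])

# Total weighted F1 score: elementwise product, then sum (a dot product)
total = float(np.sum(f1_scores * weight_array))
print(total)  # about 0.258
```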
# AutoGluon expects eval_metric to be a Scorer (or a known metric name),
# not a bare callable, so wrap the function with make_scorer
from autogluon.core.metrics import make_scorer
from autogluon.tabular import TabularPredictor

custom_eval_metric = make_scorer(
    name="weighted_f1",
    score_func=lambda y_true, y_pred: weighted_f1_score(y_true, y_pred, weights),
    optimum=1,
    greater_is_better=True,
)

# Train the models using AutoGluon with the custom evaluation metric
predictor = TabularPredictor(label="LABEL", eval_metric=custom_eval_metric)
predictor.fit(balanced_train_df, time_limit=500)
How do I customize the evaluation metric in AutoGluon? How do I define my own evaluation function and pass it as the eval_metric parameter when training the model?
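Before handing the metric to AutoGluon, it can help to sanity-check it on toy labels. A minimal, self-contained sketch assuming scikit-learn is available; the labels here are made up, and with perfect predictions every per-class F1 is 1.0, so the score should equal the sum of the weights:

```python
import numpy as np
from sklearn.metrics import f1_score

# Class weights from the question
weights = {0: 0.0385, 1: 0.0328, 2: 0.2791, 3: 0.1812,
           4: 0.0113, 5: 0.2952, 6: 0.1614, 7: 0.0001}

def weighted_f1_score(y_true, y_pred, weights):
    # Per-class F1 scores, aligned with sorted class labels
    f1_scores = f1_score(y_true, y_pred, average=None, labels=sorted(weights))
    weight_array = np.array([weights[c] for c in sorted(weights)])
    return float(np.sum(f1_scores * weight_array))

# Hypothetical toy labels: one perfectly predicted example per class
y_true = [0, 1, 2, 3, 4, 5, 6, 7]
y_pred = [0, 1, 2, 3, 4, 5, 6, 7]

score = weighted_f1_score(y_true, y_pred, weights)
print(score)  # sum of the weights, i.e. 0.9996
```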