Unrealistic Cohen's kappa results, related to kernel_constraint


I am calculating Cohen's kappa for different models using the following line:

metrics=['accuracy', tfa.metrics.CohenKappa(num_classes=4, sparse_labels=False)]

For most of the models, the kappa value is less than the accuracy value, and that makes sense; however, for two models the kappa value is much higher and approaches 1, which does not make sense.

I solved the problem by adding kernel_constraint=max_norm(norm_rate) to the output layer.

If norm_rate is 0.2, the kappa value is less than the accuracy value for the two models, but the accuracy gets worse.

If norm_rate is 2, the kappa value is higher than the accuracy value for the two models, and the accuracy is not affected.

My question is: what is the relation between Cohen's kappa and kernel_constraint, and is there a method to solve the problem without reducing accuracy?

Best Regards,


There is 1 best solution below


Never mind, I found the problem. I was comparing the validation accuracy (val_accuracy) against the training kappa (cohen_kappa); I was supposed to take the validation kappa (val_cohen_kappa).
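To illustrate the mix-up: Keras prefixes validation metrics with `val_` in the history dictionary, so the training-set kappa (`cohen_kappa`) must not be compared against `val_accuracy`. A sketch with hypothetical numbers (not the author's actual runs):

```python
# Shape of the dict returned by model.fit(...).history; values are made up
# to show how an overfit training kappa can approach 1 while the
# validation kappa stays sensibly below the validation accuracy.
history = {
    'accuracy':        [0.60, 0.70, 0.75],
    'cohen_kappa':     [0.55, 0.80, 0.95],  # training kappa, can near 1 when overfitting
    'val_accuracy':    [0.58, 0.62, 0.63],
    'val_cohen_kappa': [0.40, 0.45, 0.47],  # the value to compare with val_accuracy
}

val_acc = history['val_accuracy'][-1]
val_kappa = history['val_cohen_kappa'][-1]
print(val_kappa <= val_acc)  # prints True
```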