I have trained a Faster R-CNN model to predict multiple objects per image and classify them into two classes (binary classification). However, the model occasionally misses some ground truths completely, and I have yet to figure out how I should handle those cases. Should they be counted as false negatives?
I am also using scikit-learn to draw ROC curves, to calculate AUC, and to test different thresholds. I have been using the positive-class probabilities as y_scores. Is it good practice to assign completely missed ground truths a probability of 0.0 of being positive? Somehow it feels like I am turning them into true negatives and thus skewing the results to look better than they really are.
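To make the dilemma concrete, here is a minimal sketch with made-up scores and labels (not my real data) showing what appending missed ground truths with a score of 0.0 does to the AUC:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Positive-class probabilities for detections that were matched to a
# ground truth, plus the true class of each matched ground truth
# (1 = positive, 0 = negative). These numbers are invented.
y_true = np.array([1, 0, 1, 0, 1])
y_score = np.array([0.9, 0.6, 0.4, 0.3, 0.8])

# AUC on matched detections only (missed ground truths discarded).
auc_matched = roc_auc_score(y_true, y_score)

# Append a completely missed NEGATIVE ground truth with p = 0.0:
# it looks like a confidently correct prediction, so the AUC goes up.
auc_missed_neg = roc_auc_score(np.append(y_true, 0), np.append(y_score, 0.0))

# Append a completely missed POSITIVE ground truth with p = 0.0:
# it becomes a maximally wrong prediction, so the AUC goes down.
auc_missed_pos = roc_auc_score(np.append(y_true, 1), np.append(y_score, 0.0))

print(auc_matched, auc_missed_neg, auc_missed_pos)
```

So whether the 0.0 convention flatters or punishes the model depends on the class of the missed ground truth, which is part of what makes me uneasy about it.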
But if not that, then what? If I just discard the missed ground truths, doesn't that skew the results in the same way? If the model finds only one ground truth and classifies it correctly, it would look like both precision and recall are 1.0.
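For example, with hypothetical counts for a single image, discarding missed ground truths versus counting them as false negatives gives very different recall:

```python
# Hypothetical image: 4 ground-truth objects; the model detects exactly
# one of them and classifies it correctly.
tp = 1           # matched detection with the correct class
fp = 0           # no spurious or misclassified detections
missed_gts = 3   # ground truths the detector never found

precision = tp / (tp + fp)               # 1.0 either way

# If missed ground truths are simply discarded:
recall_discarded = tp / tp               # 1.0 -- looks perfect
# If missed ground truths are counted as false negatives:
recall_counted = tp / (tp + missed_gts)  # 0.25

print(precision, recall_discarded, recall_counted)
```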
Any ideas on how to approach these problems?