F.mse_loss & nn.MSELoss return a different MSE value on each test if the batch size is over 1


I'm using the loss function below, which combines mean squared error loss and cross-entropy loss.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Custom_Loss(nn.Module):
    def __init__(self, device='cpu'):
        super(Custom_Loss, self).__init__()
        self.device = device

    def forward(self, pred_probs, true_class, pred_points, true_points):
        """
            pred_probs:  [B x n_classes]
            true_class:  [B x 1]
            pred_points: [B x 1]
            true_points: [B x 1]
        """
        batch_size = pred_probs.size(0)
        # Sum the squared errors over the batch, then divide by the
        # batch size to get a per-sample mean.
        mse = F.mse_loss(pred_points, true_points, reduction='sum').to(self.device) / batch_size
        # cross_entropy expects logits of shape [B x n_classes] and class
        # indices of shape [B], so drop the trailing dim of true_class.
        ce = F.cross_entropy(pred_probs, true_class.squeeze(1))

        loss = mse + ce

        return loss, mse, ce
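
For context, here is a minimal smoke test of the class above; the shapes and values are made up for illustration and are not from my real model:

# Hypothetical smoke test; B = 4 samples, 3 classes (illustrative only).
criterion = Custom_Loss()
pred_probs  = torch.randn(4, 3)            # raw logits, [B x n_classes]
true_class  = torch.randint(0, 3, (4, 1))  # class indices, [B x 1]
pred_points = torch.randn(4, 1)            # regression output, [B x 1]
true_points = torch.randn(4, 1)            # regression target, [B x 1]

loss, mse, ce = criterion(pred_probs, true_class, pred_points, true_points)
print(loss.item(), mse.item(), ce.item())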

When I use this loss function to evaluate an already-trained model, it returns a different MSE value on each test run if the batch size is greater than 1. If the batch size is 1, it returns the same MSE value on every run.

My questions are these:

  • Is there any offset or normalization in PyTorch's MSE loss function that affects how the MSE is computed over a batch? (See the sketch after this list.)
  • I think one workaround is to set the batch size to 1 for validation and testing, but is there a better solution to this problem?
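
On the first question, as far as I can tell from the docs, F.mse_loss applies no hidden offset: reduction='mean' divides the summed squared error by the total number of elements, and reduction='sum' divided by the batch size (as in my code) gives the same number whenever the tensors are [B x 1], since numel equals B. A quick check with arbitrary values:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
pred   = torch.randn(8, 1)   # [B x 1], B = 8
target = torch.randn(8, 1)

mean_red = F.mse_loss(pred, target, reduction='mean')
sum_red  = F.mse_loss(pred, target, reduction='sum') / pred.size(0)
# For [B x 1] tensors the two agree, since numel == batch size.
print(torch.allclose(mean_red, sum_red))   # True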

While testing the model, if I set the batch size to 1, I always get the same MSE value. But if I set the batch size above 1, I get a different MSE value on each test run. So I suspect there is some special offset or normalization step inside the loss function.
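
To rule out run-to-run randomness before blaming the loss itself, this is the kind of evaluation loop I could try; model and test_loader are placeholder names, not my actual code. Summing squared errors across batches and dividing by the total sample count at the end also makes the reported MSE independent of the batch size (a per-batch average of means would not be, if the last batch is smaller):

import torch
import torch.nn.functional as F

torch.manual_seed(0)   # fix RNG state so any remaining randomness repeats

model.eval()           # 'model' is a placeholder; eval() freezes dropout/batchnorm
with torch.no_grad():
    sq_err_sum, n_samples = 0.0, 0
    for inputs, true_points in test_loader:   # DataLoader built with shuffle=False
        pred_points = model(inputs)
        sq_err_sum += F.mse_loss(pred_points, true_points, reduction='sum').item()
        n_samples += pred_points.size(0)
print(sq_err_sum / n_samples)   # per-sample MSE, independent of batch size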
