DeepLabv3+ for semantic segmentation: dice loss stuck

I'm trying to fine-tune a DeepLabV3+ (2D) net pretrained on the Pascal VOC dataset. The aim is to segment lung tumors, chest and lungs. So I have

4 labels: background, chest, lungs, tumor.

Since I would like to use the entire medical image without cropping to the tumor, I have an imbalanced dataset: the number of background, chest and lungs pixels is far greater than the number of tumor pixels. This can be mitigated by augmenting the images that contain tumor, but the dataset still ends up imbalanced.
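To make the imbalance concrete, this is roughly how I look at per-class pixel frequencies and derive inverse-frequency class weights (a minimal NumPy sketch; `label_maps` is a placeholder for my list of integer label maps, not actual code from my pipeline):

```python
import numpy as np

def class_pixel_stats(label_maps, num_classes=4):
    """Count pixels per class over integer label maps
    (0=background, 1=chest, 2=lungs, 3=tumor) and derive
    inverse-frequency class weights."""
    counts = np.zeros(num_classes, dtype=np.int64)
    for lbl in label_maps:
        counts += np.bincount(lbl.ravel(), minlength=num_classes)
    freqs = counts / counts.sum()
    # Inverse-frequency weights, normalized to sum to num_classes.
    weights = 1.0 / np.maximum(freqs, 1e-8)
    weights *= num_classes / weights.sum()
    return counts, freqs, weights
```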

The issue is that the generalized Dice loss is stuck at 0.9 (stuck in a local minimum?), while the accuracy keeps increasing.

Before, I used weighted binary cross-entropy as the loss function; it decreased and led to high accuracy and Dice for background, chest and lungs, but for tumors the accuracy was high (87%) while the Dice was low (0.47). Then I switched to generalized Dice loss, since that is the metric I want to optimize, and I expected it to decrease. In my opinion the issue is related to class imbalance, but I don't know how to solve it.
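For reference, the generalized Dice loss I mean is the one from Sudre et al. (2017); a minimal PyTorch sketch (my actual framework may differ, so treat this as illustrative):

```python
import torch

def generalized_dice_loss(probs, target_onehot, eps=1e-6):
    """Generalized Dice loss.
    probs:         softmax output, shape (N, C, H, W)
    target_onehot: one-hot ground truth, shape (N, C, H, W)
    Class weights are inverse squared class volumes, so rare
    classes (tumor) contribute as much as frequent ones."""
    dims = (0, 2, 3)                                # sum over batch and space
    w = 1.0 / (target_onehot.sum(dim=dims) ** 2 + eps)
    intersection = (probs * target_onehot).sum(dim=dims)
    union = (probs + target_onehot).sum(dim=dims)
    return 1.0 - 2.0 * (w * intersection).sum() / ((w * union).sum() + eps)
```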

I tried the Adam optimizer with the Dice loss, with learning rates from 1e-2 to 1e-6. I used 10 as the mini-batch size, which is the maximum my GPU can handle. The net has a softmax layer as its last activation layer.
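For context, the training loop looks roughly like this (PyTorch-style sketch; `model` and `train_loader` are placeholders for my actual network and data pipeline, and `generalized_dice_loss` is the function sketched above):

```python
import torch

# `model` is the DeepLabV3+ net, already ending in a softmax layer.
# `train_loader` yields (image, one_hot_label) batches of size 10.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # tried 1e-2 .. 1e-6

for images, labels_onehot in train_loader:
    optimizer.zero_grad()
    probs = model(images)                     # (N, 4, H, W), softmax output
    loss = generalized_dice_loss(probs, labels_onehot)
    loss.backward()
    optimizer.step()
```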
