I made a few modifications to mrcnn/model.py in an effort to harmonize it with TF2, most notably in the compile() function:

def compile(self, learning_rate, momentum):
    """Gets the model ready for training. Adds losses, regularization, and
    metrics. Then calls the Keras compile() function.
    """
    self.keras_model.metrics_tensors = []
    # Optimizer object
    optimizer = keras.optimizers.SGD(
        lr=learning_rate, momentum=momentum,
        clipnorm=self.config.GRADIENT_CLIP_NORM)
    # Add Losses
    # First, clear previously set losses to avoid duplication
    self.keras_model._losses = []
    self.keras_model._per_input_losses = {}
    loss_names = [
        "rpn_class_loss",  "rpn_bbox_loss",
        "mrcnn_class_loss", "mrcnn_bbox_loss", "mrcnn_mask_loss"]

    for name in loss_names:
        layer = self.keras_model.get_layer(name)
        # if layer.output in self.keras_model.losses:  # is this conflicting?
        #     continue
        loss = (
            tf.reduce_mean(layer.output, keepdims=True)
            * self.config.LOSS_WEIGHTS.get(name, 1.))
        self.keras_model.add_loss(loss)

    # Add L2 Regularization
    # Skip gamma and beta weights of batch normalization layers.
    reg_losses = [
        keras.regularizers.l2(self.config.WEIGHT_DECAY)(w) / tf.cast(tf.size(w), tf.float32)
        for w in self.keras_model.trainable_weights
        if 'gamma' not in w.name and 'beta' not in w.name]
    self.keras_model.add_loss(tf.add_n(reg_losses))

    # Compile
    self.keras_model.compile(
        optimizer=optimizer,
        loss=[None] * len(self.keras_model.outputs))

    # Add metrics for losses
    for name in loss_names:
        if name in self.keras_model.metrics_names:
            continue
        layer = self.keras_model.get_layer(name)
        self.keras_model.metrics_names.append(name)
        loss = (
            tf.reduce_mean(layer.output, keepdims=True)
            * self.config.LOSS_WEIGHTS.get(name, 1.))
        self.keras_model.metrics_tensors.append(loss)

When adding losses, I commented out the following:

if layer.output in self.keras_model.losses: # is this conflicting?
   continue

since that was giving me an error.
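For what it's worth, the error from that check is most likely Python's `in` operator: list containment falls back to `==`, and symbolic tensors overload `==` elementwise, so the result has no single truth value. A minimal stdlib-only sketch of the mechanism (`FakeTensor` and `ElementwiseEq` are hypothetical stand-ins, not Keras classes):

```python
class ElementwiseEq:
    """Hypothetical result of an elementwise `==`; it has no single truth value."""
    def __bool__(self):
        raise TypeError("truth value of an elementwise comparison is undefined")

class FakeTensor:
    """Hypothetical stand-in for a Keras symbolic tensor (not the real class)."""
    def __eq__(self, other):
        return ElementwiseEq()   # elementwise comparison, like tensors
    __hash__ = object.__hash__   # keep instances hashable despite custom __eq__

def in_losses(tensor, losses):
    """Mimics `layer.output in self.keras_model.losses`."""
    try:
        return tensor in losses  # list containment calls ==, then bool() on the result
    except TypeError as exc:
        return f"raised: {exc}"

print(in_losses(FakeTensor(), [FakeTensor()]))
# raised: truth value of an elementwise comparison is undefined
```

So the check raises as soon as the losses list is non-empty (or, in TF2, as soon as `bool()` is taken of a symbolic comparison), which matches the error you saw.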

I am using:

  • tensorflow: 2.2.0
  • keras: 2.3.1
  • h5py: 2.10.0

As a result, during training, for each epoch, instead of getting something like:

Epoch 1/1
200/200 [==============================] - 612s 3s/step - loss: 2.8713 - rpn_class_loss: 0.2422 - rpn_bbox_loss: 0.7934 - mrcnn_class_loss: 0.3828 - mrcnn_bbox_loss: 0.7672 - mrcnn_mask_loss: 0.6857 - val_loss: 2.1704 - val_rpn_class_loss: 0.0662 - val_rpn_bbox_loss: 0.6589 - val_mrcnn_class_loss: 0.2957 - val_mrcnn_bbox_loss: 0.6043 - val_mrcnn_mask_loss: 0.5453

I am only getting:

Epoch 1/6
200/200 [==============================] - 647s 3s/step - loss: 2.5905 - val_loss: 1.0897

From my understanding, self.keras_model.losses is empty, so in principle removing that check should not affect adding all those losses. Yet I am only seeing loss and val_loss.

Any ideas how to fix this?

Thank you

There is 1 answer below.

Change the TensorFlow version to 1.14. The RPN and MRCNN losses will start appearing along with the train and validation ones.

You might struggle to downgrade the version, so I would suggest the following:

  • create a virtual environment
  • install the requirements.txt modules from the matterport mrcnn GitHub repository
  • use the matterport mrcnn folder without any changes to your code
  • execute

You will then get all the losses during training.
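As a rough sketch of those steps (the clone URL is the well-known matterport repository; environment and path names are assumptions, so adjust to your setup):

```shell
# Create and activate an isolated environment
# (assumes a python3 compatible with TF 1.14, i.e. Python 3.6/3.7)
python3 -m venv mrcnn-env
source mrcnn-env/bin/activate

# Get the matterport Mask R-CNN code and install its pinned dependencies
git clone https://github.com/matterport/Mask_RCNN.git
cd Mask_RCNN
pip install -r requirements.txt
pip install "tensorflow==1.14"   # or tensorflow-gpu==1.14 for GPU training

# Then run your training script against this unmodified model.py
```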

In case you get an AttributeError,

AttributeError: 'Model' object has no attribute 'metrics_tensors'

you just need to make the following change in the model.py file:

Modify self.keras_model.metrics_tensors.append(loss) -> self.keras_model.add_metric(loss, name=name)

Note that name must be passed as a keyword argument, because the second positional parameter of add_metric is aggregation, not name.
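A tiny stdlib-only sketch of that call pattern (`DummyModel` is a hypothetical stand-in that just mimics the `add_metric(value, aggregation=None, name=None)` signature, not the real Keras class):

```python
class DummyModel:
    """Hypothetical stand-in mimicking Keras' add_metric signature."""
    def __init__(self):
        self.metric_names = []

    def add_metric(self, value, aggregation=None, name=None):
        # If name were passed positionally, it would land in `aggregation`.
        self.metric_names.append(name)

model = DummyModel()
for name in ["rpn_class_loss", "mrcnn_mask_loss"]:
    loss = 0.5  # placeholder for the weighted tf.reduce_mean(...) tensor
    model.add_metric(loss, name=name)

print(model.metric_names)  # ['rpn_class_loss', 'mrcnn_mask_loss']
```

With the keyword form, each per-head loss is registered under its own name, which is what makes the individual losses show up in the training progress bar again.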

Good luck!