How to use a custom loss function with neural compressor for distillation

I am trying out Neural Compressor (Intel LPOT) to reduce the size of my CNN model implemented in PyTorch. I intend to do distillation.

Below is the code used to distill the model:

    from neural_compressor.experimental import Distillation, common
    from neural_compressor.experimental.common.criterion import PyTorchKnowledgeDistillationLoss
    distiller = Distillation(args.config)
    distiller.student_model = model
    distiller.teacher_model = teacher
    distiller.criterion = PyTorchKnowledgeDistillationLoss()
    distiller.train_func = train_func
    model = distiller.fit()

I want to change the loss function, i.e. I need to provide a custom loss function that I have implemented in PyTorch. Currently I see that in Neural Compressor I can change the loss functions used between teacher and student by passing arguments to distiller.criterion, i.e. by

    distiller.criterion = PyTorchKnowledgeDistillationLoss(loss_types=['CE', 'KL']) 

I assume this works because KullbackLeiblerDivergence and cross-entropy loss are already available in Neural Compressor. Is there any way to provide my own custom loss function to distiller.criterion?

Answer by AlekhyaV - Intel:

In the Neural Compressor source there is a class called PyTorchKnowledgeDistillationLoss, which has SoftCrossEntropy and KullbackLeiblerDivergence as member functions. If you want to use your own custom loss function, add a new member function to the PyTorchKnowledgeDistillationLoss class that takes logits and targets as parameters, e.g.:

    class PyTorchKnowledgeDistillationLoss(KnowledgeDistillationLoss):
        ...
        def customLossFunction(self, logits, targets):
            # calculate the custom loss here
            return custom_loss

Then, in the __init__ function (constructor) of PyTorchKnowledgeDistillationLoss, assign:

    self.teacher_student_loss = self.customLossFunction
    self.student_targets_loss = self.customLossFunction
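
Putting this together, here is a minimal sketch of a subclass that plugs in a custom loss and is then handed to distiller.criterion. This is an illustration, not the library's documented API: the constructor arguments (temperature, loss_types, loss_weights) and the MSE-style placeholder loss are assumptions, so check them against the PyTorchKnowledgeDistillationLoss definition in your installed version of neural_compressor.

    import torch.nn.functional as F
    from neural_compressor.experimental.common.criterion import PyTorchKnowledgeDistillationLoss

    class MyDistillationLoss(PyTorchKnowledgeDistillationLoss):
        """Hypothetical subclass that routes both loss terms through a custom function."""
        def __init__(self, temperature=1.0, loss_types=['CE', 'CE'],
                     loss_weights=[0.5, 0.5]):
            # Constructor arguments are assumed; verify against your installed version.
            super().__init__(temperature=temperature,
                             loss_types=loss_types,
                             loss_weights=loss_weights)
            # Override the two loss slots with the custom function
            self.teacher_student_loss = self.customLossFunction
            self.student_targets_loss = self.customLossFunction

        def customLossFunction(self, logits, targets):
            # Placeholder loss: MSE between softened distributions.
            # Replace with your own PyTorch loss; note that the student/targets
            # term may receive hard labels rather than logits, depending on how
            # the loss is invoked internally.
            return F.mse_loss(F.softmax(logits, dim=-1),
                              F.softmax(targets, dim=-1))

    distiller.criterion = MyDistillationLoss()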