I am getting the following error while trying to apply static quantization to a model. The error occurs in the fusion step, torch.quantization.fuse_modules(model, modules_to_fuse):
model = torch.quantization.fuse_modules(model, modules_to_fuse)
File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuse_modules.py", line 146, in fuse_modules
_fuse_modules(model, module_list, fuser_func, fuse_custom_config_dict)
File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuse_modules.py", line 77, in _fuse_modules
new_mod_list = fuser_func(mod_list, additional_fuser_method_mapping)
File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuse_modules.py", line 45, in fuse_known_modules
fuser_method = get_fuser_method(types, additional_fuser_method_mapping)
File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuser_method_mappings.py", line 132, in get_fuser_method
assert fuser_method is not None, "did not find fuser method for: {} ".format(op_list)
AssertionError: did not find fuser method for: (<class 'torch.nn.modules.conv.Conv2d'>,)
The modules_to_fuse list should obey the following rules:

A torch.nn.modules.conv.Conv2d cannot be fused on its own. It has to be listed together with the layers that follow it, in one of the supported combinations such as "conv, bn", "conv, bn, relu", or "conv, relu"; other combinations are not supported. Prepare your fusing list from these combinations. It worked for me. The full set of supported combinations is defined in torch/ao/quantization/fuser_method_mappings.py, the file shown at the bottom of the traceback above.
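As a minimal sketch (the model and submodule names here are made up for illustration, not taken from the original code), fusion succeeds when every entry in modules_to_fuse spells out one of those patterns:

import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(16)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.relu2 = nn.ReLU()

    def forward(self, x):
        x = self.relu1(self.bn1(self.conv1(x)))
        return self.relu2(self.conv2(x))

model = SmallNet().eval()  # fusion for static quantization expects eval mode

# Each inner list names a supported pattern: [conv, bn, relu] and [conv, relu].
# A bare ["conv1"] entry on its own would reproduce the AssertionError above.
modules_to_fuse = [["conv1", "bn1", "relu1"], ["conv2", "relu2"]]
model = torch.quantization.fuse_modules(model, modules_to_fuse)

After this call, conv1/bn1/relu1 are replaced by a single fused module and the remaining slots become nn.Identity, so the rest of the static quantization flow (prepare, calibrate, convert) can proceed.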