I have been trying out Intel Extension for PyTorch (IPEX) to optimise my inference. I am using a pretrained model from torchvision. I wanted to compare the improvement with and without IPEX, so I created a copy of the model and converted that copy to IPEX.
Now I try to do inference with both my original model and the model converted to IPEX.
For the model converted to IPEX I have no issues, but for my original model I get the error below.
RuntimeError: Input type (torch.FloatTensor) and weight type (XPUFloatType) should be the same
This error looks like it is caused by my original model also getting converted to IPEX. How do I prevent the original model, i.e. model_original, from being converted to IPEX?
Below is a minimal reproducer.
import intel_pytorch_extension as ipex
import torchvision
import torch
import torch.utils.data as Data
model = torchvision.models.resnet50(pretrained=True)
model.eval()
model_original=model #original pytorch model which does not use ipex
model_ipex=model
model_ipex.to(ipex.DEVICE) # a copy of a model converted to use IPEX
transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize((500, 400)),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
dataset = torchvision.datasets.ImageFolder(
    root='dataset',
    transform=transform,
)
loader = Data.DataLoader(
    dataset=dataset,
    batch_size=1
)

for data, target in loader:  # inference with the ipex model, this works fine
    print(target)
    output = model_ipex(data)

for data, target in loader:  # inference with the original model, this fails
    print(target)
    output = model_original(data)
When you assign a PyTorch model to a new variable, as in model_ipex = model, no copy is made: both names refer to the same object in memory, so any change you make through model_ipex also affects model_original. In addition, model_ipex.to(ipex.DEVICE) converts an nn.Module in place, so your original model is offloaded as well and its weights become XPUFloatType, while you are still feeding it a torch.FloatTensor input, which causes the error. If you need two separate copies of the model, you should use a deep copy.
https://www.geeksforgeeks.org/copy-python-deep-copy-shallow-copy/
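For example, a minimal sketch of the fix based on your reproducer (only the line that creates the second model changes):

import copy
import intel_pytorch_extension as ipex
import torchvision

model = torchvision.models.resnet50(pretrained=True)
model.eval()

model_original = model             # still the plain CPU (torch.FloatTensor) model
model_ipex = copy.deepcopy(model)  # an independent copy of the module and its weights
model_ipex.to(ipex.DEVICE)         # only this copy is moved to the IPEX device

Note that deepcopy duplicates every parameter in memory; if that is a concern for a large model, you can instead create the second model by calling torchvision.models.resnet50(pretrained=True) a second time.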