I have a seq2seq model in PyTorch that I want to run with CoreML. When exporting the model to ONNX the input dimensions are fixed to the shape of the tensor used during export, and the same happens again when converting from ONNX to CoreML.
import torch
from onnx_coreml import convert
x = torch.ones((32, 1, 1000)) # N x C x W
model = Model()
torch.onnx.export(model, x, 'example.onnx')
mlmodel = convert(model='example.onnx', minimum_ios_deployment_target='13')
mlmodel.save('example.mlmodel')
For the ONNX export you can specify dynamic dimensions -
torch.onnx.export(
    model, x, 'example.onnx',
    input_names=['input'],
    output_names=['output'],
    dynamic_axes={
        'input': {0: 'batch', 2: 'width'},
        'output': {0: 'batch', 1: 'owidth'},
    },
)
But this leads to a RuntimeWarning when converting to CoreML -
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "compiler error: Blob with zero size found:
For inference in CoreML I would like the batch (first) and width (last) dimensions to either be dynamic or to be statically changeable.
Is that possible?
The input dimensions can be made dynamic in ONNX by specifying dynamic_axes for torch.onnx.export. The exported model then accepts inputs of size [batch, 1, width], where batch and width are dynamic.
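As a minimal sketch of that export path (the Conv1d below is a stand-in for the original seq2seq model, and onnxruntime is assumed to be installed for the check), you can export with dynamic_axes and then feed the ONNX model inputs with different batch sizes and widths -

import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Stand-in for the original seq2seq model (assumption, for illustration only):
# any module that maps [N, 1, W] -> [N, 1, W] works here.
model = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, padding=1)
model.eval()

x = torch.ones((32, 1, 1000))  # N x C x W, only used to trace the graph
torch.onnx.export(
    model, x, 'example.onnx',
    input_names=['input'],
    output_names=['output'],
    dynamic_axes={
        'input': {0: 'batch', 2: 'width'},
        'output': {0: 'batch', 2: 'owidth'},  # output width is axis 2 for this stand-in
    },
)

# The exported graph now accepts [batch, 1, width] for arbitrary batch and width.
sess = ort.InferenceSession('example.onnx')
for shape in [(4, 1, 500), (16, 1, 2000)]:
    out = sess.run(None, {'input': np.ones(shape, dtype=np.float32)})[0]
    print(shape, '->', out.shape)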