Input dimension reshape when using PyTorch model with CoreML


I have a seq2seq model in PyTorch that I want to run with CoreML. When exporting the model to ONNX, the input dimensions are fixed to the shape of the tensor used during export, and they stay fixed through the conversion from ONNX to CoreML.

import torch
from onnx_coreml import convert

x = torch.ones((32, 1, 1000))  # example input, N x C x W
model = Model()                # the seq2seq model defined elsewhere
torch.onnx.export(model, x, 'example.onnx')  # input shape gets fixed to x's shape

mlmodel = convert(model='example.onnx', minimum_ios_deployment_target='13')
mlmodel.save('example.mlmodel')
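
The fixed shape can be confirmed by inspecting the converted model's spec. A quick sketch, assuming coremltools is installed alongside onnx_coreml:

import coremltools

mlmodel = coremltools.models.MLModel('example.mlmodel')
spec = mlmodel.get_spec()
# The input description shows the hard-coded 32 x 1 x 1000 shape
print(spec.description.input)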

For the ONNX export you can mark dimensions as dynamic:

torch.onnx.export(
    model, x, 'example.onnx',
    input_names = ['input'],
    output_names = ['output'],
    dynamic_axes={
        'input' : {0 : 'batch', 2: 'width'},
        'output' : {0 : 'batch', 1: 'owidth'},
    }
)
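
To confirm that the dynamic axes made it into the exported graph, the file can be inspected with the onnx package. A small sketch, assuming onnx is installed:

import onnx

onnx_model = onnx.load('example.onnx')
onnx.checker.check_model(onnx_model)
# Dynamic dimensions show up as named dim_params instead of fixed dim_values
print(onnx_model.graph.input[0].type.tensor_type.shape)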

But this leads to a RuntimeWarning when converting to CoreML:

RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "compiler error: Blob with zero size found:

For inference in CoreML I would like the batch (first) and width (last) dimensions to either be dynamic or at least be changeable after conversion.

Is that possible?

1 Answer

The dimensions of the input can be made dynamic in ONNX by specifying dynamic_axes for torch.onnx.export.

torch.onnx.export(
    model,
    x,
    'example.onnx',
    # Assigning names to the inputs to reference in dynamic_axes
    # Your model only has one input: x
    input_names=["input"],
    # Define which dimensions should be dynamic
    # Names of the dimensions are optional, but recommended.
    # Could just be: {"input": [0, 2]}
    dynamic_axes={"input": {0: "batch", 2: "width"}}
)

Now the exported model accepts inputs of size [batch, 1, width], where batch and width are dynamic.
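
If the converted Core ML model still reports a fixed input shape, the input can additionally be marked as flexible on the Core ML side with coremltools' flexible shape utilities. A hedged sketch, assuming the converter named the input "input" and that a range-based shape is acceptable (the bounds below are illustrative):

import coremltools
from coremltools.models.neural_network import flexible_shape_utils

spec = coremltools.utils.load_spec('example.mlmodel')

# Let the batch (axis 0) and width (axis 2) dimensions vary within a range;
# the channel dimension stays fixed at 1. Feature name and bounds are assumptions.
flexible_shape_utils.set_multiarray_ndshape_range(
    spec,
    feature_name='input',
    lower_bounds=[1, 1, 1],
    upper_bounds=[64, 1, 2000],
)

coremltools.utils.save_spec(spec, 'example_flexible.mlmodel')

Alternatively, flexible_shape_utils.add_multiarray_ndshape_enumeration can restrict the input to a fixed set of shapes instead of a range, which the Core ML compiler sometimes handles more gracefully.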