Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers


Update #1 (original question and details below):

Per the suggestion of @MatthijsHollemans below, I tried removing dynamic_axes from the initial create_onnx step (shown at the bottom). This removed both:

Description of image feature 'input_image' has missing or non-positive width 0.

and

Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.

Unfortunately, this opens up two sub-questions:

  • I still want to have a functional ONNX model. Is there a more appropriate way to make H and W dynamic? Or should I be saving two versions of the ONNX model: one without dynamic_axes for the CoreML conversion, and one with them for use as a valid ONNX model? (See the export sketch just after these sub-questions.)

  • Although this solves the compilation error in Xcode (specified below), it introduces the following runtime issues:

Finalizing CVPixelBuffer 0x282f4c5a0 while lock count is 1.
[espresso] [Espresso::handle_ex_plan] exception=Invalid X-dimension 1/480 status=-7
[coreml] Error binding image input buffer input_image: -7
[coreml] Failure in bindInputsAndOutputs.

I am calling this model the same way I was calling the fixed-size model, which still works fine. The image dimensions are 640 x 480.

As specified below, the model should accept any image of 64x64 or larger.

For flexible-shape models, do I need to provide the input differently in Xcode?
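
As for the first sub-question, the dual-export idea would look something like this minimal sketch (the "-static"/"-dynamic" file names are placeholders I made up; model is the loaded TransformerNetwork from create_onnx at the bottom):

import torch

def export_onnx_variants(name, model):
    dummy_input = torch.zeros(1, 3, 64, 64)
    common = dict(opset_version=10,
                  input_names=["input_image"],
                  output_names=["stylized_image"])

    # Static export: fixed 64x64 shape, used only for the CoreML conversion.
    torch.onnx.export(model, dummy_input,
                      "onnx/" + name + "-static.onnx", **common)

    # Dynamic export: H and W stay flexible, kept as the "real" ONNX model.
    torch.onnx.export(model, dummy_input,
                      "onnx/" + name + "-dynamic.onnx",
                      dynamic_axes={'input_image': {2: 'height', 3: 'width'},
                                    'stylized_image': {2: 'height', 3: 'width'}},
                      **common)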


Original Question (parts still relevant)

I have been slowly working on converting a style transfer model from PyTorch > ONNX > CoreML. One of the ongoing struggles has been supporting flexible/dynamic input and output shapes.

This method (aside from the I/O renaming) has worked well on iOS 12 and 13 when using a static input shape.

I am using the following code to do the ONNX > CoreML conversion:

import coremltools
from coremltools.models.neural_network import flexible_shape_utils
from onnx_coreml import convert  # or coremltools.converters.onnx, depending on version

def create_coreml(name):
    mlmodel = convert(
        model="onnx/" + name + ".onnx",
        preprocessing_args={'is_bgr': True},
        deprocessing_args={'is_bgr': True},
        image_input_names=['input_image'],
        image_output_names=['stylized_image'],
        minimum_ios_deployment_target='13')

    spec = mlmodel.get_spec()

    # Allow any height/width of 64 or more; -1 means no upper bound.
    img_size_ranges = flexible_shape_utils.NeuralNetworkImageSizeRange()
    img_size_ranges.add_height_range((64, -1))
    img_size_ranges.add_width_range((64, -1))

    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='input_image',
        size_range=img_size_ranges)

    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='stylized_image',
        size_range=img_size_ranges)

    mlmodel = coremltools.models.MLModel(spec)

    mlmodel.save("mlmodel/" + name + ".mlmodel")

Although the conversion 'succeeds', there are a couple of warnings (line breaks added for readability):

Translation to CoreML spec completed. Now compiling the CoreML model.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111: 
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: 
Error compiling model: 
"Error reading protobuf spec. validator error: Description of image feature 'input_image' has missing or non-positive width 0.".
  RuntimeWarning)

Model Compilation done.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111: 
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: 
Error compiling model: 
"compiler error:  Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
".
  RuntimeWarning)

If I ignore these warnings and try to compile the model for the latest target (13.0), I get the following error in Xcode:

coremlc: Error: compiler error:  Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
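
To see which inputs layer '63' actually consumes, the layer graph can also be dumped straight from the spec. A minimal sketch, assuming the default neuralNetwork model type:

spec = mlmodel.get_spec()
for i, layer in enumerate(spec.neuralNetwork.layers):
    # Each layer lists the blob names it reads and writes.
    print(i, layer.name, list(layer.input), list(layer.output))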

Here is what the problematic area looks like in Netron:

[screenshot: Netron graph view around layer '63']


My main question is: how can I get these two warnings out of the way?
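
One fallback I am considering, in case the open-ended range is the culprit: enumerate an explicit list of sizes instead of a range. A minimal sketch (the size list is just an example) that would replace the two update_image_size_range calls in create_coreml above:

from coremltools.models.neural_network import flexible_shape_utils

def use_enumerated_sizes(spec):
    # Explicit list of supported (height, width) pairs instead of a range.
    sizes = [flexible_shape_utils.NeuralNetworkImageSize(height=h, width=w)
             for (h, w) in [(64, 64), (480, 640), (640, 480)]]
    for feature in ('input_image', 'stylized_image'):
        flexible_shape_utils.add_enumerated_image_sizes(
            spec, feature_name=feature, sizes=sizes)

Since mlmodel.predict() accepts a PIL image for image inputs (on macOS), either variant could be smoke-tested from Python before going near Xcode.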

Happy to provide any other details.

Thanks for any advice!


Below is my PyTorch > ONNX conversion:

import torch
import onnx
import transformer  # project module defining TransformerNetwork

def create_onnx(name):
    prior = torch.load("pth/" + name + ".pth")
    model = transformer.TransformerNetwork()
    model.load_state_dict(prior)

    dummy_input = torch.zeros(1, 3, 64, 64)  # I wasn't sure what to set H and W to here.

    torch.onnx.export(model, dummy_input, "onnx/" + name + ".onnx",
                      verbose=True,
                      opset_version=10,
                      input_names=["input_image"], # These are being renamed from garbled originals.
                      output_names=["stylized_image"], # ^
                      dynamic_axes={'input_image':
                                    {2: 'height', 3: 'width'},
                                    'stylized_image':
                                    {2: 'height', 3: 'width'}}
                      )

    # torch.onnx.export has already written the file; reload it to verify.
    onnx_model = onnx.load("onnx/" + name + ".onnx")
    onnx.checker.check_model(onnx_model)
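
For completeness, the dynamic export can be sanity-checked with onnxruntime at a couple of sizes before the CoreML step. A minimal sketch (onnxruntime is not part of my pipeline above, and the file name is just an example):

import numpy as np
import onnxruntime as ort

def check_onnx(path):
    sess = ort.InferenceSession(path)
    for h, w in [(64, 64), (480, 640)]:
        x = np.random.rand(1, 3, h, w).astype(np.float32)
        out = sess.run(None, {"input_image": x})[0]
        print((h, w), "->", out.shape)  # expect matching H/W on the output

check_onnx("onnx/example-dynamic.onnx")  # hypothetical file name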