Core ML model float input for a PyTorch model


I have a PyTorch model that takes a 3 x width x height image as input, with the pixel values normalized to the range 0-1.

E.g., the input in PyTorch:

import numpy as np
import torch
from skimage import io  # assuming skimage.io here; the original only shows io.imread

img = io.imread(img_path)
input_img = torch.from_numpy(np.transpose(img, (2, 0, 1))).contiguous().float() / 255.0  # HWC uint8 -> CHW float in [0, 1]

I converted this model to Core ML and exported an .mlmodel file that takes the input with the correct dimensions:

Image (Color width x height)

However, my predictions are incorrect, because the model expects float values between 0 and 1 while the CVPixelBuffer holds integer values between 0 and 255.

I tried to normalize the values inside the model like so:

z = x.mul(1.0/255.0) # div op is not supported for export yet
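
For context, here is a minimal sketch of what that looked like in the model definition; the wrapper class and names below are illustrative, not from the original code:

import torch
import torch.nn as nn

class ScaledInputModel(nn.Module):
    # Hypothetical wrapper: scale a 0-255 input down to 0-1 before the real model runs
    def __init__(self, base_model):
        super().__init__()
        self.base_model = base_model

    def forward(self, x):
        z = x.mul(1.0 / 255.0)  # mul() instead of div(), since div was not exportable at the time
        return self.base_model(z)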

However, when this op runs inside the model at the Core ML level, the int * float result is cast back to int, so nearly every value becomes 0 (e.g., 200 * (1.0/255.0) ≈ 0.78, which truncates to 0).

The cast op is not supported for export either, e.g., x = x.float().

How can I make sure my input is properly scaled for prediction? Essentially, I want to take the RGB pixel values, divide them by 255.0 as floats, and pass the result to the model for inference.

Best answer:

I solved it using the Core ML ONNX converter's preprocessing_args, like so:

preprocessing_args= {'image_scale' : (1.0/255.0)}
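
For completeness, a sketch of how that argument fits into the onnx_coreml convert call; the file names and the input name 'input' below are placeholders for whatever your exported ONNX graph actually uses:

from onnx_coreml import convert

# Assumes the PyTorch model was already exported to ONNX (e.g., via torch.onnx.export).
# 'input' must match the name of the image input in the ONNX graph.
mlmodel = convert(
    model='model.onnx',
    image_input_names=['input'],
    preprocessing_args={'image_scale': 1.0 / 255.0},  # bakes the /255 scaling into the model
)
mlmodel.save('model.mlmodel')

With the scale baked in this way, the app can pass a plain CVPixelBuffer and Core ML applies the 1/255 scaling before the network sees the data.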

Hope this helps someone