I use Python to get my ONNX model's input shape:
import onnxruntime

providers = ['AzureExecutionProvider', 'CPUExecutionProvider']  # specify the desired providers
sess_options = onnxruntime.SessionOptions()
sess = onnxruntime.InferenceSession(model_path, sess_options, providers=providers)  # model_path points to the .onnx file
input_shape = sess.get_inputs()[0].shape  # shape of the first graph input
print(f"Input shape: {input_shape}")
It prints:
Input shape: ['input_dynamic_axes_1', 'input_dynamic_axes_2', 'input_dynamic_axes_3', 'input_dynamic_axes_4']
When I run the model with ONNX Runtime Web in JavaScript:
const session = await ort.InferenceSession.create(model, {
  executionProviders: ["webgpu", "webgl"],
});
// Feed the first (and only) input by name and read back the first output.
const feeds: any = {};
const inputNames = session.inputNames;
feeds[inputNames[0]] = inputTensor;
const results = await session.run(feeds);
const outputData = results[session.outputNames[0]].data;
return outputData as any;
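(The construction of inputTensor isn't shown above; assume it is a float32 tensor with dims [1, 3, 800, 400], matching the shape reported in the error below, roughly like this sketch:)

// Assumed sketch of how inputTensor is built; the dims [1, 3, 800, 400] match the error below.
const inputData = new Float32Array(1 * 3 * 800 * 400); // placeholder data
const inputTensor = new ort.Tensor("float32", inputData, [1, 3, 800, 400]);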
it raises this error:
Uncaught (in promise) Error: input tensor[0] check failed: expected shape '[,,,]' but got [1,3,800,400]
validateInputTensorDims
normalizeAndValidateInputs
(anonymous function)
event
run
run
run
runInference
I think the reason is that the ONNX model's input shape is dynamic, so the following ONNX Runtime Web code always ends up with expectedDims == [null, null, null, null]:
private validateInputTensorDims(
    graphInputDims: Array<readonly number[]>, givenInputs: Tensor[], noneDimSupported: boolean) {
  for (let i = 0; i < givenInputs.length; i++) {
    const expectedDims = graphInputDims[i];
    const actualDims = givenInputs[i].dims;
    if (!this.compareTensorDims(expectedDims, actualDims, noneDimSupported)) {
      throw new Error(`input tensor[${i}] check failed: expected shape '[${expectedDims.join(',')}]' but got [${
          actualDims.join(',')}]`);
    }
  }
}
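As a quick sanity check on the '[,,,]' in the error message (this just reproduces the string formatting above, it is not library code): when every graph input dimension is symbolic, expectedDims contains no concrete numbers, and Array.prototype.join renders null/undefined entries as empty strings.

// Reproduces the '[,,,]' from the error: join() renders null/undefined entries as empty strings.
const expectedDims: Array<number | null> = [null, null, null, null];
console.log(`expected shape '[${expectedDims.join(',')}]'`); // -> expected shape '[,,,]'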
So my question is: how can I run a model with a dynamic input shape in ONNX Runtime Web (i.e., how do I skip or satisfy this input shape check)?