TensorRT build_engine gives error with static input dimensions


I am trying to build a CUDA engine using static dimensions, following this documentation: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html

However, I keep seeing the following error:

[TensorRT] ERROR: (Unnamed Layer* 249) [PluginV2Ext]: PluginV2Layer must be V2DynamicExt when there are runtime input dimensions.

This error points to runtime input dimensions; however, I need to specify and use static dimensions instead. I searched many online forums, but they all cover runtime dimensions together with an optimization profile.
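To rule out dynamic shapes coming from the model itself, I can inspect the parsed network's inputs for -1 entries with something like this (simplified sketch; the ONNX model path is a placeholder):

import tensorrt as trt

# Simplified sketch (model path is a placeholder): parse the ONNX model and
# print each network input's shape; any -1 entry is a runtime dimension.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    parser.parse(f.read())
for i in range(network.num_inputs):
    print(network.get_input(i).name, network.get_input(i).shape)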

I also tried creating an optimization profile with identical MIN/OPT/MAX shapes, as shown below, but that didn't help either.

profile = builder.create_optimization_profile()
# Identical MIN/OPT/MAX shapes for the input named "foo"
profile.set_shape("foo", (1, 3, 100, 200), (1, 3, 100, 200), (1, 3, 100, 200))
config.add_optimization_profile(profile)
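One thing I am considering, as a rough sketch, is overwriting the input tensor's shape with fully static dimensions before building, so that no -1 dimensions remain, and skipping the profile entirely ("foo" is my input's name; builder, network, and config are the same objects as above):

# Sketch of what I am considering: pin the input to a fully static shape
# before building, so no runtime (-1) dimensions remain.
inp = network.get_input(0)
inp.shape = (1, 3, 100, 200)          # static NCHW shape
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30   # 1 GiB
engine = builder.build_engine(network, config)

I am not sure whether pinning the shape like this is enough to avoid the PluginV2Ext check, though.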

Could anyone please provide some pointers on how to use static dimensions instead, or how to avoid the check for runtime input dimensions?
