Running DeepLab v3+ with TensorRT

I'm trying to optimize a DeepLab v3+ model using TensorRT, and I get the following errors:

    UFF Version 0.5.5
=== Automatically deduced input nodes ===
[name: "ImageTensor"
op: "Placeholder"
attr {
  key: "_output_shapes"
  value {
    list {
      shape {
        dim {
          size: 1
        }
        dim {
          size: -1
        }
        dim {
          size: -1
        }
        dim {
          size: 3
        }
      }
    }
  }
}
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
]
=========================================

=== Automatically deduced output nodes ===
[name: "Squeeze_1"
op: "Squeeze"
input: "resize_images/ResizeNearestNeighbor"
attr {
  key: "T"
  value {
    type: DT_INT64
  }
}
attr {
  key: "_output_shapes"
  value {
    list {
      shape {
        dim {
          size: 1
        }
        dim {
          size: -1
        }
        dim {
          size: -1
        }
      }
    }
  }
}
attr {
  key: "squeeze_dims"
  value {
    list {
      i: 3
    }
  }
}
]
==========================================

Using output node Squeeze_1
Converting to UFF graph
Warning: No conversion function registered for layer: ResizeNearestNeighbor yet.
Converting resize_images/ResizeNearestNeighbor as custom op: ResizeNearestNeighbor
Warning: No conversion function registered for layer: ExpandDims yet.
Converting ExpandDims_1 as custom op: ExpandDims
Warning: No conversion function registered for layer: Slice yet.
Converting Slice as custom op: Slice
Warning: No conversion function registered for layer: ArgMax yet.
Converting ArgMax as custom op: ArgMax
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting ResizeBilinear_2 as custom op: ResizeBilinear
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting ResizeBilinear_1 as custom op: ResizeBilinear
Traceback (most recent call last):
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\iariav\Anaconda3\envs\tensorflow\Scripts\convert-to-uff.exe\__main__.py", line 9, in <module>
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\bin\convert_to_uff.py", line 89, in main
    debug_mode=args.debug
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\converters\tensorflow\conversion_helpers.py", line 187, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\converters\tensorflow\conversion_helpers.py", line 157, in from_tensorflow
    debug_mode=debug_mode)
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\converters\tensorflow\converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\converters\tensorflow\converter.py", line 72, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: 'logits/semantic/biases/read'

From what I understand, this is caused by some layers that are not supported by the UFF converter? Has anyone succeeded in converting a DeepLab model to UFF? I'm using the original DeepLab v3+ model in TensorFlow.
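
For reference, the conversion is invoked through the UFF converter's Python entry point (the same from_tensorflow_frozen_model call shown in the traceback); the frozen-graph filename below is a placeholder:

    import uff

    # Rough equivalent of the convert-to-uff command that produced the log
    # above. "frozen_inference_graph.pb" is a placeholder for the exported
    # DeepLab v3+ frozen graph. No output nodes are passed, so the converter
    # auto-deduces them, as seen in the log.
    uff.from_tensorflow_frozen_model(
        "frozen_inference_graph.pb",
        output_filename="deeplab.uff",
    )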

Thanks

2 Answers

Yeah, sometimes getting a specific model to work in TensorRT is a bit tricky due to layer support. With the new TensorRT 5 GA, these are the supported layers (taken from the Developer Guide):

TensorFlow Supported Layers

As you can see, you have some unsupported layers like ResizeNearestNeighbor, ResizeBilinear and ArgMax. Your best approach, and what I ended up doing, is to port the network up to a certain point and use the C++ API to create the layers you need. Check IPluginV2 and IPluginCreator and see if you can implement the missing layers yourself.
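
On the conversion side, you can map the unsupported TF ops to plugin nodes before running the UFF converter. This is only a minimal sketch: it assumes the graphsurgeon package that ships with TensorRT, and the *_TRT plugin op names are hypothetical; they have to match whatever you register in C++ through IPluginCreator. The node names are taken from the log above.

    import graphsurgeon as gs

    # Hypothetical preprocessor module for convert-to-uff (passed with its
    # -p flag). Each unsupported TF node is collapsed into a plugin node
    # whose "op" must match a plugin registered via IPluginCreator in C++.
    def preprocess(dynamic_graph):
        namespace_plugin_map = {
            "ArgMax": gs.create_plugin_node(
                name="ArgMax", op="ArgMax_TRT"),
            "ResizeBilinear_1": gs.create_plugin_node(
                name="ResizeBilinear_1", op="ResizeBilinear_TRT"),
            "ResizeBilinear_2": gs.create_plugin_node(
                name="ResizeBilinear_2", op="ResizeBilinear_TRT"),
        }
        dynamic_graph.collapse_namespaces(namespace_plugin_map)

You would then point convert-to-uff at this file with -p; the plugin kernels themselves still have to be implemented in C++ against IPluginV2.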

I think more layer support will roll out over time, but if you can't wait, just give it a try yourself.

I have run the DeepLab v3+ model on a Jetson Nano using TF-TRT. As per the TensorRT release notes:

Deprecation of Caffe Parser and UFF Parser - We are deprecating Caffe Parser and UFF Parser in TensorRT 7. They will be tested and functional in the next major release of TensorRT 8, but we plan to remove the support in the subsequent major release. Plan to migrate your workflow to use tf2onnx, keras2onnx or TensorFlow-TensorRT (TF-TRT) for deployment.

Using TF-TRT, I could get an optimized TensorRT graph, and it ran successfully even after re-training on my dataset.

Also, if some operators are not supported in the version you are using, execution of those specific operators falls back to TensorFlow. This means there will not be any execution error; the graph will simply be less optimized.
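
As a minimal sketch of that workflow, assuming TensorFlow 1.x as shipped for the Jetson Nano, the output node name from the log above, and a placeholder path for the frozen graph:

    import tensorflow as tf
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    # Load the frozen DeepLab v3+ graph; the .pb path is a placeholder.
    with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        frozen_graph = tf.compat.v1.GraphDef()
        frozen_graph.ParseFromString(f.read())

    # Replace TensorRT-compatible subgraphs with TRT engine ops. Anything
    # unsupported (e.g. ArgMax) is left to run in TensorFlow.
    converter = trt.TrtGraphConverter(
        input_graph_def=frozen_graph,
        nodes_blacklist=["Squeeze_1"],  # graph output node, from the log
        precision_mode="FP16",          # assumed precision choice
    )
    trt_graph = converter.convert()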

References:

  1. TF-TRT user guide: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#integrate-ovr
  2. TensorFlow blog: https://blog.tensorflow.org/2019/06/high-performance-inference-with-TensorRT.html