Install TensorRT with custom plugins


I'm able to install the desired version of TensorRT following the official NVIDIA guide (https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#maclearn-net-repo-install):

sudo apt-get update && \
sudo apt-get install -y libnvinfer7=7.1.3-1+cuda10.2 libnvonnxparsers7=7.1.3-1+cuda10.2 libnvparsers7=7.1.3-1+cuda10.2 libnvinfer-plugin7=7.1.3-1+cuda10.2 libnvinfer-dev=7.1.3-1+cuda10.2 libnvonnxparsers-dev=7.1.3-1+cuda10.2 libnvparsers-dev=7.1.3-1+cuda10.2 libnvinfer-plugin-dev=7.1.3-1+cuda10.2 python3-libnvinfer=7.1.3-1+cuda10.2 && \
sudo apt-mark hold libnvinfer7 libnvonnxparsers7 libnvparsers7 libnvinfer-plugin7 libnvinfer-dev libnvonnxparsers-dev libnvparsers-dev libnvinfer-plugin-dev python3-libnvinfer

But I need some custom plugins. Fortunately, I found the one I needed, added it to the plugin folder (https://github.com/NVIDIA/TensorRT/tree/master/plugin), and registered it. What I don't understand now is how to build and install TensorRT with the added plugin. The official GitHub repo (https://github.com/NVIDIA/TensorRT) has instructions, but they describe the steps for building a Docker image with TensorRT.

So the question is: how do I build TensorRT with a custom plugin and install it on Ubuntu?


2 Answers

BEST ANSWER

It's quite easy to "install" a custom plugin once it is registered. The steps are the following:

  1. Install TensorRT:

    sudo apt-get update && \
    sudo apt-get install -y libnvinfer7=7.1.3-1+cuda10.2 libnvonnxparsers7=7.1.3-1+cuda10.2 libnvparsers7=7.1.3-1+cuda10.2 libnvinfer-plugin7=7.1.3-1+cuda10.2 libnvinfer-dev=7.1.3-1+cuda10.2 libnvonnxparsers-dev=7.1.3-1+cuda10.2 libnvparsers-dev=7.1.3-1+cuda10.2 libnvinfer-plugin-dev=7.1.3-1+cuda10.2 python3-libnvinfer=7.1.3-1+cuda10.2 && \
    sudo apt-mark hold libnvinfer7 libnvonnxparsers7 libnvparsers7 libnvinfer-plugin7 libnvinfer-dev libnvonnxparsers-dev libnvparsers-dev libnvinfer-plugin-dev python3-libnvinfer
    

    Note: I installed TensorRT v7.1.3-1 with CUDA 10.2. If you want another version, change the version strings accordingly, but be careful: the TensorRT and CUDA versions must match, and not every TensorRT release is built against every CUDA release.
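
    To double-check that the TensorRT packages and CUDA toolkit on your machine actually match, something like this helps (a rough sanity check, not part of the installation itself):

    dpkg -l | grep nvinfer      # installed TensorRT package versions
    nvcc --version              # CUDA toolkit version, if the toolkit is installed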

  2. Build the library libnvinfer_plugin.so.x.x.x as described at https://github.com/NVIDIA/TensorRT

    Note: x.x.x is the version of the library; in my case it is 7.1.3. A rough outline of the build is sketched below.
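
    For orientation, the out-of-source build from the TensorRT OSS repository looks roughly like this (a sketch; the branch name and library path are assumptions, so follow the repository README for your exact version):

    git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
    cd TensorRT && git submodule update --init --recursive
    mkdir -p build && cd build
    cmake .. -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu -DTRT_OUT_DIR=$(pwd)/out
    make -j$(nproc)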

  3. Delete the existing libraries in /usr/lib/x86_64-linux-gnu (x86 architecture) or /usr/lib/aarch64-linux-gnu (arm64): libnvinfer_plugin.so.7.1.3, libnvinfer_plugin.so.7 and libnvinfer_plugin.so

    Again, the file names depend on the TensorRT version.
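
    Concretely, on x86_64 with TensorRT 7.1.3 this amounts to (a sketch; verify the file names on your system before deleting anything):

    cd /usr/lib/x86_64-linux-gnu
    sudo rm libnvinfer_plugin.so.7.1.3 libnvinfer_plugin.so.7 libnvinfer_plugin.so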

  4. Copy the library libnvinfer_plugin.so.7.1.3 you built to /usr/lib/x86_64-linux-gnu if you have the x86 architecture, or to /usr/lib/aarch64-linux-gnu for arm64.
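
    For example (assuming the library landed in build/out, as in the build sketch above):

    sudo cp TensorRT/build/out/libnvinfer_plugin.so.7.1.3 /usr/lib/x86_64-linux-gnu/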

  5. Make symlinks for the libraries:

    sudo ln -s libnvinfer_plugin.so.7.1.3 libnvinfer_plugin.so.7
    sudo ln -s libnvinfer_plugin.so.7 libnvinfer_plugin.so
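
    After the symlinks are in place, refreshing the dynamic linker cache and checking that the new library is registered is a cheap way to confirm the swap worked:

    sudo ldconfig
    ldconfig -p | grep nvinfer_plugin    # should list the freshly installed library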
    
ANOTHER ANSWER

The best way to install the plugin is to follow the official documentation of the tensorrt-plugin-generator repository. You only have to write the few class functions that cannot be generated automatically; the rest of the source, as well as the Makefile, is generated by a Python script from a YAML file.

There is no need to link against the nvinfer_plugin libraries. This is also a faster way of generating a plugin, since it avoids recompiling the whole nvinfer_plugin library from the TensorRT repository.

You can pass the custom plugin library (*.so) generated by the Makefile to trtexec with the --plugins option.
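
For example (a sketch; model.onnx and libcustom_plugin.so are placeholder names for your model and the generated plugin library):

    trtexec --onnx=model.onnx --plugins=./libcustom_plugin.so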