Convert a TensorRT engine file back to a source ONNX file, or PyTorch model weights


I wanted to explore possible options for model conversions. Converting a PyTorch model to ONNX is pretty straightforward. After that, is it possible to convert the ONNX model file into a TensorRT engine file using the TensorRT Python API?

I also wanted to get a general idea of whether it's even possible to do the reverse sequence of steps: suppose I only have a TensorRT engine file (the original model and its weights are unknown), and I want to recover the model architecture and weights using only that file. Is there any way to do that?

Thanks in advance

I am new to TensorRT and I am searching for a source to get started with this.


There is 1 answer below


Converting a model to a TensorRT engine is essentially a tradeoff between generality and performance.

ONNX tries to be as general a format as possible. It is a common format to which you can export NN models from various frameworks such as PyTorch or TensorFlow. It tries to solve the problem: "How do I convert NN models back and forth between formats?"
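For reference, exporting a PyTorch model to ONNX usually comes down to a single `torch.onnx.export` call. A minimal sketch; the `SimpleNet` module, the dummy input shape, the output file name and the opset version are placeholders chosen for illustration, not taken from the question:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for whatever network you actually trained.
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 4)

    def forward(self, x):
        return self.fc(x)

model = SimpleNet().eval()
dummy_input = torch.randn(1, 16)  # example input used to trace the graph

torch.onnx.export(
    model,
    dummy_input,
    "simple_net.onnx",        # output path (placeholder name)
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)
```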

TensorRT, on the other hand, tries to optimize neural network models for Nvidia hardware. It is far from general: if you convert a model to a TensorRT engine on your PC, it will likely not work on mine. The converter makes changes to the model and measures which changes make it run faster on your specific combination of hardware, drivers, middleware and so on.
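To the first part of your question: yes, the TensorRT Python API can build an engine from an ONNX file. A minimal sketch, assuming a TensorRT 8.x-style API and placeholder file names; the exact API differs between TensorRT versions:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file produced earlier (placeholder name).
with open("simple_net.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX file")

config = builder.create_builder_config()
# The resulting engine is tuned for the GPU this code runs on.
serialized_engine = builder.build_serialized_network(network, config)

with open("simple_net.engine", "wb") as f:
    f.write(serialized_engine)
```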

Even though all kinds of reverse engineering are possible, I think the answer to your question is: no. It is not plausible to convert a TensorRT engine back to the ONNX model it was made from. You can, however, make educated guesses about the model by inspecting the TensorRT engine.
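As for those educated guesses: the engine's I/O tensors and, on newer TensorRT versions, the engine inspector expose some of the optimized network's structure. A minimal sketch, assuming a TensorRT 8.5+ Python API and a placeholder engine file name:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# Deserialize an existing engine file (placeholder name).
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# The I/O tensor names, shapes and dtypes hint at the network's interface.
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_shape(name), engine.get_tensor_dtype(name))

# The engine inspector (TensorRT >= 8.2) dumps per-layer information.
# Note: it describes the fused, optimized layers, not the original ONNX graph,
# and full detail is only available if the engine was built with
# ProfilingVerbosity.DETAILED.
inspector = engine.create_engine_inspector()
print(inspector.get_engine_information(trt.LayerInformationFormat.JSON))
```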

You can read more on how TensorRT works on their website.