How to pre-compile TensorRT models on a different machine than the one used for inference?


A neural model runs on edge devices with various hardware configurations.

I want to pre-compile the model so it can be deployed as a TensorRT engine, without needing to compile on the edge device.

  1. How similar does the hardware of the machine that compiles the model need to be to the edge hardware? For example, can I use an Nvidia RTX 4060 GPU to compile a model that would run on an Nvidia RTX 4090?
  2. How can I generate a good identifier for the compiled model, preferably using PyTorch code? (`mymodel_ada_lovelace.trt`, `mymodel_4090.trt`, or something else? See the sketch after this list.)
  3. Are there cloud services that would do TensorRT compilation for given hardware?
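To make question 2 concrete, here is a minimal sketch of the kind of naming I have in mind. It assumes the compute capability of the GPU and the TensorRT version are the right things to key on; the helper name `engine_identifier` is just illustrative:

```python
import torch

def engine_identifier(model_name: str) -> str:
    # Compute capability of the current GPU, e.g. (8, 9) on an RTX 4090
    # (Ada Lovelace). TensorRT engines are tied to the GPU architecture
    # they were built for, so this seems like a natural key.
    major, minor = torch.cuda.get_device_capability()
    try:
        import tensorrt as trt
        # Engines are generally not portable across TensorRT versions,
        # so bake the version into the name as well.
        trt_version = trt.__version__.replace(".", "_")
    except ImportError:
        trt_version = "unknown"
    return f"{model_name}_sm{major}{minor}_trt{trt_version}.trt"

print(engine_identifier("mymodel"))  # e.g. mymodel_sm89_trt10_0_1.trt
```

Note that a 4060 and a 4090 would both yield `sm89` here; whether keying on compute capability alone is sufficient is essentially what question 1 is asking.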