I want to use a model from my Triton Inference Server model repository inside another custom Python model in the same repository. Is this possible? If so, how can I do it?
I suspect it could be done by building a custom Python backend stub, but I was wondering if there is a simpler way.
Yes. You can construct a `pb_utils.InferenceRequest` and call its `exec()` method to invoke another model in the same model repository. This feature is called Business Logic Scripting (BLS).
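Here is a minimal sketch of a `model.py` that forwards its input to another model via BLS. The model name `other_model` and the tensor names `INPUT0`/`OUTPUT0` are placeholders — substitute the names from your own model configurations:

```python
# model.py — BLS sketch; runs only inside the Triton Python backend,
# where triton_python_backend_utils is provided by the server.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Take this request's input tensor (placeholder name "INPUT0").
            input_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")

            # Build a request against the other model in the repository
            # ("other_model" is a placeholder) and execute it synchronously.
            infer_request = pb_utils.InferenceRequest(
                model_name="other_model",
                requested_output_names=["OUTPUT0"],
                inputs=[input_tensor],
            )
            infer_response = infer_request.exec()

            if infer_response.has_error():
                raise pb_utils.TritonModelException(
                    infer_response.error().message())

            # Pass the other model's output back as this model's output.
            output_tensor = pb_utils.get_output_tensor_by_name(
                infer_response, "OUTPUT0")
            responses.append(pb_utils.InferenceResponse(
                output_tensors=[output_tensor]))
        return responses
    ```

Note that `other_model` must be loaded in the same Triton server for `exec()` to succeed.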
You can find more details in the Business Logic Scripting documentation: https://github.com/triton-inference-server/python_backend#business-logic-scripting-beta