How to debug models running in TensorFlow Serving?


I wanted to try my seq2seq model in TensorFlow Serving for deployment, so I implemented a custom op to replace a py_func and successfully export the Python-based code.

I have tested that C++ custom op in two ways:

  1. Loading it with load_op_library (see the sketch after this list)
  2. Building TensorFlow Serving from source

Both ways run successfully, but the output and accuracy differ in the second case.
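For reference, the first approach looks roughly like this (a minimal sketch; the .so path, op name, and input are placeholders for my actual seq2seq setup):

```python
import tensorflow as tf

# Load the compiled custom op library (path and op name are placeholders).
custom_module = tf.load_op_library('./my_custom_op.so')

# Exercise the op on toy input (my real graph feeds seq2seq tensors here).
inp = tf.constant([[1.0, 2.0], [3.0, 4.0]])
out = custom_module.my_custom_op(inp)

with tf.Session() as sess:
    # printf() output from inside the op kernel shows up on stdout here.
    print(sess.run(out))
```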

I have also added some printf() calls inside the custom op implementation to debug the input and output values.

With load_op_library, the prints appear on stdout.

With TF Serving, I could not see any prints on stdout.
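In the second approach I query the exported model through TF Serving, roughly as below (a minimal sketch assuming the REST endpoint is enabled; the model name, port, and input are placeholders), and the returned values differ from the first approach:

```python
import json
import requests

# Same toy input as in the load_op_library test above.
payload = {"instances": [[1.0, 2.0], [3.0, 4.0]]}

# Placeholder model name and default REST port for TF Serving.
resp = requests.post(
    'http://localhost:8501/v1/models/my_model:predict',
    data=json.dumps(payload))
print(resp.json())  # these values differ from the load_op_library run
```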

  1. Why does running the same code in TF Serving not give accurate results?

  2. How can I debug the values when the model is running in TF Serving (I could not see printf's output)?

  3. Is there any way to inspect the values in a SavedModel's variables files?
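Regarding question 3, the files under a SavedModel's variables/ directory are in the standard checkpoint format, so I was thinking of something like the following sketch to read them (the export path and variable name are placeholders; I'm not sure whether this is the recommended way):

```python
import tensorflow as tf

# A SavedModel's variables/ directory holds standard checkpoint shards;
# 'variables' is the checkpoint prefix (the export path is a placeholder).
reader = tf.train.NewCheckpointReader('export_dir/1/variables/variables')

# List every variable name with its shape.
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)

# Dump the values of one variable (name is hypothetical).
print(reader.get_tensor('embedding/weights'))
```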
