I wanted to deploy my seq2seq model with TensorFlow Serving. To make the Python-based code exportable, I implemented a custom C++ op to replace a `py_func`.
I have tested that C++ custom op in two ways:
- Loading it with `load_op_library`
- Building TensorFlow Serving from source
Both ways run successfully, but the output and accuracy differ when running the second way.
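For reference, this is a minimal sketch of the first way (testing via `load_op_library`). The library path `./my_op.so` and the op name `MyOp`/`my_op` are hypothetical placeholders for my actual compiled op:

```python
import os
import tensorflow as tf

# Hypothetical path: the custom op compiled into a shared library, e.g. with
#   g++ -std=c++14 -shared my_op.cc -o my_op.so -fPIC ${TF_CFLAGS} ${TF_LFLAGS} -O2
LIB_PATH = "./my_op.so"

if os.path.exists(LIB_PATH):
    # Way 1: load the op into an ordinary Python process.
    my_op_module = tf.load_op_library(LIB_PATH)
    # REGISTER_OP("MyOp") in the C++ file becomes my_op_module.my_op(...).
    result = my_op_module.my_op(tf.constant([1.0, 2.0]))
    print(result)
else:
    print("custom-op library not built; expected it at", LIB_PATH)
```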
I have also added some `printf()` calls inside the custom op implementation to debug the input and output values. With `load_op_library`, they print to stdout. With TF Serving, I could not see any prints in stdout.
Why does running the same code in TF Serving not give accurate results?
How can I debug the values while the model is running in TF Serving (since I cannot see the `printf` output)?
Is there any way to inspect the values in a SavedModel's `variables` files?
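To clarify what I mean by inspecting the `variables` files: something along these lines, sketched here against a tiny throwaway model rather than my real one, since the `variables` directory of a SavedModel is an ordinary checkpoint:

```python
import os
import tempfile
import tensorflow as tf

# A tiny stand-in model, just so there is a SavedModel to inspect.
class Tiny(tf.Module):
    def __init__(self):
        self.w = tf.Variable([1.0, 2.0, 3.0], name="w")

export_dir = os.path.join(tempfile.mkdtemp(), "export")
tf.saved_model.save(Tiny(), export_dir)

# The variables/ subdirectory holds a checkpoint with prefix "variables";
# tf.train.load_checkpoint reads raw tensor values without rebuilding the model.
reader = tf.train.load_checkpoint(os.path.join(export_dir, "variables", "variables"))
for key in reader.get_variable_to_shape_map():
    print(key, reader.get_variable_to_shape_map()[key])
```

Is this the right approach, or is there a more direct tool for it?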