Error when calling prediction with base64 input


I am using TensorFlow Hub's image retraining example to export a saved_model to be served with TensorFlow Serving using Docker (https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py).

I followed some instructions I found online and modified export_model as shown below:

def export_model(module_spec, class_count, saved_model_dir):
  """Exports model for serving.

  Args:
    module_spec: The hub.ModuleSpec for the image module being used.
    class_count: The number of classes.
    saved_model_dir: Directory in which to save exported model and variables.
  """
  # The SavedModel should hold the eval graph.
  sess, in_image, _, _, _, _ = build_eval_session(module_spec, class_count)

  # Shape of [None] means we can have a batch of images.
  image = tf.placeholder(shape=[None], dtype=tf.string)

  with sess.graph.as_default() as graph:
    tf.saved_model.simple_save(
        sess,
        saved_model_dir,
        #inputs={'image': in_image},
        inputs={'image_bytes': image},
        outputs={'prediction': graph.get_tensor_by_name('final_result:0')},
        legacy_init_op=tf.group(tf.tables_initializer(), name='legacy_init_op')
    )

The problem is that when I try to call the API using Postman, it returns this error:

{
    "error": "Tensor Placeholder_1:0, specified in either feed_devices or fetch_devices was not found in the Graph"
}
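
For reference, this is roughly how I am sending the request; the model name "retrained" and port 8501 are just what I happen to start the Docker container with:

import base64
import json

import requests

# The TF Serving REST API expects binary inputs wrapped in a
# {"b64": ...} object, keyed by the signature's input name.
with open('test.jpg', 'rb') as f:
    image_b64 = base64.b64encode(f.read()).decode('utf-8')

payload = {'instances': [{'image_bytes': {'b64': image_b64}}]}

response = requests.post(
    'http://localhost:8501/v1/models/retrained:predict',
    data=json.dumps(payload))
print(response.json())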

Do I need to modify the retraining process so it can accept base64 input?
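
From what I can tell, the placeholder I added is created before the with sess.graph.as_default() block, so it ends up in a different graph and is never connected to the model, which would explain the error. Below is a sketch of the re-wiring I think is needed: freeze the eval graph, then re-import it with the decoded images mapped onto the original image placeholder. The decode_and_resize helper and the use of convert_variables_to_constants are my own guesses based on other examples, not something I have verified end to end:

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.python.framework import graph_util

def export_model(module_spec, class_count, saved_model_dir):
  """Exports a model that accepts base64-encoded image bytes."""
  # Build the eval graph as before, then freeze its variables into
  # constants so the whole graph can be re-imported with an input_map.
  sess, in_image, _, _, _, _ = build_eval_session(module_spec, class_count)
  frozen_graph_def = graph_util.convert_variables_to_constants(
      sess, sess.graph.as_graph_def(), ['final_result'])

  height, width = hub.get_expected_image_size(module_spec)

  with tf.Graph().as_default() as serving_graph:
    # Shape of [None] means a batch of images. TF Serving's REST API
    # fills this from the {"b64": ...} fields in the request.
    image_bytes = tf.placeholder(tf.string, shape=[None], name='image_bytes')

    def decode_and_resize(one_image):
      img = tf.image.decode_jpeg(one_image, channels=3)
      img = tf.image.convert_image_dtype(img, tf.float32)
      return tf.image.resize_images(img, [height, width])

    decoded = tf.map_fn(decode_and_resize, image_bytes, dtype=tf.float32)

    # Re-import the frozen graph, splicing the decoded batch in where
    # the original float image placeholder used to be.
    prediction, = tf.import_graph_def(
        frozen_graph_def,
        input_map={in_image.name: decoded},
        return_elements=['final_result:0'])

    with tf.Session(graph=serving_graph) as serving_sess:
      tf.saved_model.simple_save(
          serving_sess,
          saved_model_dir,
          inputs={'image_bytes': image_bytes},
          outputs={'prediction': prediction})

With this, the signature input would be a placeholder that is actually wired into the graph, instead of the dangling one in my version above.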
