I have a trained network in TensorFlow that I wish to use for prediction with gcloud ml-engine serving.
The serving prediction should accept images as float32 numpy arrays of size 320x240x3 and return two small matrices as output.
Does anyone know how I should create the input layers to accept this kind of input?
I have tried multiple approaches, for example base64-encoded JSON files, but casting the string to float produces an error saying the cast is not supported:
"error": "Prediction failed: Exception during model execution: LocalError(code=StatusCode.UNIMPLEMENTED, details=\"Cast string to float is not supported\n\t [[Node: ToFloat = Cast[DstT=DT_FLOAT, SrcT=DT_STRING, _output_shapes=[[-1,320,240,3]], _device=\"/job:localhost/replica:0/task:0/cpu:0\"](ParseExample/ParseExample)]]\")"
This is an example of creating the JSON file (after saving the numpy array above as a JPEG):
python -c 'import base64, sys, json; img = base64.b64encode(open(sys.argv[1], "rb").read()); print json.dumps({"images": {"b64": img}})' example_img.jpg &> request.json
And the TensorFlow code attempting to handle the input:
# Placeholder receiving the serialized tf.Example protos
raw_str_input = tf.placeholder(tf.string, name='source')
feature_configs = {
    'image/encoded': tf.FixedLenFeature(shape=[], dtype=tf.string),
}
tf_example = tf.parse_example(raw_str_input, feature_configs)
# This cast is what fails: the parsed feature is still an encoded string
input = tf.identity(tf.to_float(tf_example['image/encoded']), name='input')
The above is one of the tests I tried; I also attempted several other TensorFlow approaches to handle the input, but none of them worked...
If you're using binary data with predictions, your input/output aliases must end in 'bytes'. So I think you need to do something like the following.
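Here is a minimal sketch of a serving input that renames the alias to end in '_bytes' and decodes the JPEG bytes instead of casting the string directly. The alias name images_bytes, the decode/resize steps, and the [0, 1] scaling are assumptions based on your description, not a definitive recipe:

import tensorflow as tf

# Batch of serialized tf.Example protos sent by the prediction service
serialized_examples = tf.placeholder(tf.string, shape=[None], name='source')

feature_configs = {
    # alias ends in '_bytes' so the service treats the value as base64 binary data
    'images_bytes': tf.FixedLenFeature(shape=[], dtype=tf.string),
}
tf_example = tf.parse_example(serialized_examples, feature_configs)

def decode_and_resize(jpeg_bytes):
    # Decode the JPEG instead of casting the raw string to float
    img = tf.image.decode_jpeg(jpeg_bytes, channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)  # float32 in [0, 1] (assumed scaling)
    return tf.image.resize_images(img, [320, 240])

images = tf.map_fn(decode_and_resize, tf_example['images_bytes'], dtype=tf.float32)
input = tf.identity(images, name='input')

The JSON request would then use the matching key, e.g.:

python -c 'import base64, sys, json; img = base64.b64encode(open(sys.argv[1], "rb").read()); print json.dumps({"images_bytes": {"b64": img}})' example_img.jpg > request.json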