I trained an Object Detection model using Azure Custom Vision, and the predictions obtained with 'quick test' in the portal are different from the ones obtained offline (with the sample code provided).
Project info
Domains: 'General (compact)'
Export Capabilities: 'Basic platforms (Tensorflow, CoreML, ONNX, ...)'
When exporting the model I selected: "Tensorflow" > "Tensorflow"
Original Image
Quick Test (CV Portal)
I used a dataset of 50 images containing only cats. In the portal I got the following result:
Offline Prediction
(For box visualization I added the following code to the main method of the 'prediction.py' sample included in the exported .zip file:)
import cv2

image_cv = cv2.imread(image_filename)
HEIGHT, WIDTH, channels = image_cv.shape
for prediction in predictions:
    print(prediction, "\n")
    # Bounding boxes are normalized to [0, 1]; scale them to pixel coordinates
    box = prediction['boundingBox']
    x1, y1 = box['left'], box['top']
    x2, y2 = x1 + box['width'], y1 + box['height']
    x1, y1, x2, y2 = round(x1 * WIDTH), round(y1 * HEIGHT), round(x2 * WIDTH), round(y2 * HEIGHT)
    image_cv = cv2.rectangle(image_cv, (x1, y1), (x2, y2), color=(0, 0, 0), thickness=1)
    # Draw the tag name and probability (thick dark outline, thin white fill)
    label = "%s : %.3f" % (prediction['tagName'], prediction['probability'])
    cv2.putText(image_cv, label, (x1, y1 - 10), cv2.FONT_HERSHEY_PLAIN, 1, (100, 0, 0), 3)
    cv2.putText(image_cv, label, (x1, y1 - 10), cv2.FONT_HERSHEY_PLAIN, 1, (255, 255, 255), 1)
cv2.imwrite("Predicted-img.jpg", image_cv)
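Since the sample code returns bounding boxes normalized to [0, 1], a mistake in the scaling step is one possible source of the portal/offline mismatch. Here is a minimal, self-contained sketch of just the conversion used above, easy to sanity-check in isolation (the helper name `to_pixel_box` is mine, not from the sample):

```python
def to_pixel_box(bounding_box, width, height):
    """Convert a Custom Vision normalized boundingBox dict
    (keys: left, top, width, height) to pixel (x1, y1, x2, y2)."""
    x1 = bounding_box['left']
    y1 = bounding_box['top']
    x2 = x1 + bounding_box['width']
    y2 = y1 + bounding_box['height']
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

# Example: a box covering the left half of a 640x480 image
box = {'left': 0.0, 'top': 0.0, 'width': 0.5, 'height': 1.0}
print(to_pixel_box(box, 640, 480))  # (0, 0, 320, 480)
```

If the boxes drawn this way land where the portal draws them, the visualization is fine and the difference lies in the model inference itself.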
Can someone help me find what went wrong? I can provide the 'model.pb' file and the sample code if needed. I'm not sure whether I messed up the export settings or some other small detail.
Here is the link to the repo with my project; it might help.
And a link to the Microsoft Q&A issue I opened.