I am trying to use the Azure Face API to verify faces, but the code below gives me a 404 Resource Not Found error. What am I doing wrong? With the same subscription key and endpoint I am able to detect a face from a URL and match it, but what I need is to read an image from the webcam and match it against a template image.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
import cv2
import requests
# This key will serve all examples in this document.
KEY = "d06a1f0aae344d7cac11f78eef5abf37"
# This endpoint will be used in all examples in this quickstart.
ENDPOINT = "https://neurohome-facerecognition.cognitiveservices.azure.com/face/v1.0/detect/"
ENDPOINT_verify = "https://neurohome-facerecognition.cognitiveservices.azure.com/face/v1.0/verify"
# Create an authenticated FaceClient.
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
face_client_verify = FaceClient(ENDPOINT_verify, CognitiveServicesCredentials(KEY))
headers = {
    'Content-Type': 'application/octet-stream',
    'Ocp-Apim-Subscription-Key': KEY}
headers_verify = {
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': KEY}
api_url = ENDPOINT
params = {
    'returnFaceLandmarks': True,
    'returnFaceAttributes': 'emotion,age,gender'}
# Base url for the Verify and Facelist/Large Facelist operations
IMAGE_BASE_PATH = 'C:/FaceRecognition/template/'
# Create a list to hold the target photos of the same person
#target_image_file_names = ['Family1-Dad1.jpg', 'Family1-Dad2.jpg']
# The source photos contain this person
source_image_file_name = 'template_viswa.jpg'
local_image = cv2.imread(IMAGE_BASE_PATH + source_image_file_name)
img = cv2.imencode('.jpg', local_image)[1].tobytes()
# Detect face(s) from source image 1, returns a list[DetectedFaces]
# We use detection model 3 to get better performance.
#detected_faces = face_client.face.detect_with_stream(img)
detected_faces = requests.post(api_url,params=params,headers=headers,data=img)
faceId_source = detected_faces.json()[0]['faceId']
# Add the returned face's face ID
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    imgnew = cv2.imencode('.jpg', frame)[1].tobytes()
    detected_faces_dest = requests.post(ENDPOINT, params=params, headers=headers, data=imgnew)
    faceId_dest = detected_faces_dest.json()[0]['faceId']
    face_client_verify.face.verify_face_to_face(faceId_source, faceId_dest)
If you are using the Azure Face SDK for Python to verify faces, the ENDPOINT_verify for FaceClient should be:
https://neurohome-facerecognition.cognitiveservices.azure.com/
instead of:
https://neurohome-facerecognition.cognitiveservices.azure.com/face/v1.0/verify
The SDK appends the /face/v1.0/<operation> paths to the endpoint itself, so passing an endpoint that already contains an operation path results in a 404.
Just try the code below to create a face client for verification:
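A minimal sketch, assuming the azure-cognitiveservices-vision-face package is installed; the key below is a placeholder for your own subscription key:

from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

# Use the resource's base endpoint only; the SDK adds /face/v1.0/<operation> itself.
KEY = "<your subscription key>"
ENDPOINT = "https://neurohome-facerecognition.cognitiveservices.azure.com/"

face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))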
Try the code below to have a quick test:
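A sketch of an end-to-end test, assuming the same template image and webcam setup as in your question; detect_with_stream and verify_face_to_face are the SDK calls, while the key placeholder and single-frame capture are illustrative:

import io
import cv2
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

KEY = "<your subscription key>"
ENDPOINT = "https://neurohome-facerecognition.cognitiveservices.azure.com/"

face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Detect the face in the local template image.
with open('C:/FaceRecognition/template/template_viswa.jpg', 'rb') as template:
    source_faces = face_client.face.detect_with_stream(template)
source_face_id = source_faces[0].face_id

# Grab one frame from the webcam and detect the face in it.
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()
frame_bytes = cv2.imencode('.jpg', frame)[1].tobytes()
dest_faces = face_client.face.detect_with_stream(io.BytesIO(frame_bytes))
dest_face_id = dest_faces[0].face_id

# Verify whether the two face IDs belong to the same person.
result = face_client.face.verify_face_to_face(source_face_id, dest_face_id)
print(result.is_identical, result.confidence)

With the base endpoint, the same FaceClient can be used for both detection and verification, so there is no need for a separate face_client_verify.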