I am trying to create an automatic attendance system with OpenCV in which I need to get an RTSP stream from an IP camera, detect faces in it, and recognize them.
I created separate threads for frame capturing and drawing because the face recognition function needs some time to recognize a face.
But just creating two threads, one for frame reading and the other for drawing, already uses around 70% CPU, and creating the facenet_pytorch model increases usage to 80-90% CPU.
Does anyone know how to reduce the CPU usage?
My program:
import cv2
import threading
from facenet_pytorch import InceptionResnetV1

cap = cv2.VideoCapture("rtsp://test:[email protected]")
resnet = InceptionResnetV1(pretrained='vggface2').eval()

ret, frame = cap.read()
exit = False

# thread 1: keep reading frames from the RTSP stream
def th1():
    global ret, frame, exit
    while True:
        ret, frame = cap.read()
        if exit:
            break

# thread 2: keep drawing the latest frame until the window is closed
def th2():
    global ret, frame, exit
    while True:
        cv2.imshow('frame', frame)
        cv2.waitKey(1)
        if cv2.getWindowProperty('frame', cv2.WND_PROP_VISIBLE) < 1:
            exit = True
            break

t1 = threading.Thread(target=th1)
t1.start()
t2 = threading.Thread(target=th2)
t2.start()
Update:
I used time.sleep(0.2) in all my threads except the frame-reading one, and it worked; my CPU usage is around 30% now.
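For context, this is roughly what that change looks like in the display thread; the 0.2 s value comes from the update above, and the rest mirrors th2 from the program:

import time

def th2():
    global ret, frame, exit
    while True:
        cv2.imshow('frame', frame)
        cv2.waitKey(1)
        time.sleep(0.2)   # yield the CPU between redraws
        if cv2.getWindowProperty('frame', cv2.WND_PROP_VISIBLE) < 1:
            exit = True
            break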
Two issues.

First, th2 runs in an almost-tight loop. It won't consume a whole core of CPU because waitKey(1) sleeps for some time, but there is no synchronization at all between the threads, and you need it. You need a threading.Event to notify the consumer thread of a fresh frame. The consumer thread must wait until a fresh frame is available, because it's pointless to display the same old frame again and again. You can be lazy and use waitKey(30) instead; for the displaying thread, that's good enough.
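A minimal sketch of that producer/consumer hand-off with threading.Event, reusing the reader/display split and window name from the question (the RTSP URL is a placeholder):

import threading
import cv2

cap = cv2.VideoCapture("rtsp://user:pass@camera-ip")  # placeholder URL

frame = None
fresh = threading.Event()   # set by the reader when a new frame is available
stop = threading.Event()    # set when the display window is closed

def reader():
    global frame
    while not stop.is_set():
        ok, img = cap.read()
        if not ok:
            stop.set()
            break
        frame = img
        fresh.set()          # wake the consumer: a fresh frame is ready

def display():
    while not stop.is_set():
        if not fresh.wait(timeout=1.0):   # block until a fresh frame arrives
            continue
        fresh.clear()
        cv2.imshow('frame', frame)
        cv2.waitKey(1)
        if cv2.getWindowProperty('frame', cv2.WND_PROP_VISIBLE) < 1:
            stop.set()

t1 = threading.Thread(target=reader)
t2 = threading.Thread(target=display)
t1.start()
t2.start()
t2.join()
t1.join()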
Second, VideoCapture. You don't do any error checking at all! You must check that the stream actually opened and that every read returned a valid frame.
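A minimal sketch of that error checking (again, the URL is a placeholder):

import cv2

cap = cv2.VideoCapture("rtsp://user:pass@camera-ip")  # placeholder URL
if not cap.isOpened():
    raise RuntimeError("could not open the RTSP stream")

ok, frame = cap.read()
if not ok or frame is None:
    raise RuntimeError("failed to read a frame from the stream")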