If my model takes 80ms to process each frame (12.5fps) but the camera is running at 30fps, which frame does CameraInference process next?
Does it grab the current frame just before processing? Or does it pull an earlier frame from a frame buffer?
Code example:

from picamera import PiCamera
from aiy.vision.inference import CameraInference

# `my_model` and `args` (providing `args.num_frames`) are defined elsewhere.
with PiCamera() as camera:
    camera.sensor_mode = 4
    camera.resolution = (1640, 1232)
    camera.framerate = 30
    camera.start_preview()
    with CameraInference(my_model.model()) as inference:
        for i, result in enumerate(inference.run()):
            if i == args.num_frames:
                break
            print('frame: {}, dur: {}, result: {}'.format(
                i, result.duration_ms, result.tensors['y'].data[0]))
    camera.stop_preview()
The inference pipeline on the Vision Bonnet drops frames while inference is still running on the current frame, so the Python API always gives you the result for the most recently captured frame. Of course, if inference_fps > camera_fps, then no frames are dropped.
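To make the timing concrete, here is a small, purely hypothetical simulation of that "drop while busy, then take the freshest frame" behavior (it models the policy described above, not the actual Vision Bonnet firmware; the function name and parameters are invented for illustration):

```python
def simulate_drop_latest(num_frames=30, camera_fps=30.0, inference_ms=80.0):
    """Sketch of a pipeline that drops frames captured while inference is busy.

    Returns the indices of the frames that actually get processed.
    """
    processed = []   # indices of frames that reach the inference engine
    t = 0.0          # wall-clock time at which the engine is next free
    next_new = 0     # first frame index not yet captured or processed
    while next_new < num_frames:
        # Freshest frame already captured by time t (frame i arrives at i/fps).
        newest = min(num_frames - 1, int(t * camera_fps))
        if newest < next_new:
            # Engine went idle before the next frame arrived: wait for it.
            t = next_new / camera_fps
            newest = next_new
        processed.append(newest)        # process the most recent frame...
        next_new = newest + 1           # ...dropping every older unseen frame
        t += inference_ms / 1000.0      # engine stays busy for one inference
    return processed

frames = simulate_drop_latest()
print(len(frames), frames[:5])
```

With 80ms inference against 30fps capture, the simulation processes roughly 13-14 of 30 frames, and each processed index is the newest frame available when the engine freed up. Setting inference_ms below the ~33ms frame period makes it process every frame, matching the "no drops when inference_fps > camera_fps" case above.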