How to implement a CMSampleBuffer for MLkit facial detection?


Basically, I'm trying to create a simple real-time face detection iOS app that streams the user's face and tells them whether their eyes are closed. I'm following the Google tutorial here - https://firebase.google.com/docs/ml-kit/ios/detect-faces. I'm on step 2 (Run the Face Detector) and I'm trying to create a VisionImage from a CMSampleBufferRef. I'm basically just copying the code, but when I do, there is no reference to "sampleBuffer" as shown in the tutorial. I don't know what to do, as I don't really understand how CMSampleBuffer works.

1 Answer

Dong Chen:

ML Kit has a Quickstart app showing how to do that. Here is the code:

https://github.com/firebase/quickstart-ios/tree/master/mlvision
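To expand on where `sampleBuffer` comes from: it is not a variable you create yourself; it is handed to you by AVFoundation in the `AVCaptureVideoDataOutputSampleBufferDelegate` callback once you attach an `AVCaptureVideoDataOutput` to a capture session. Here is a minimal sketch assuming the Firebase ML Kit API from the linked tutorial (`FirebaseMLVision`); the class name, queue label, and the 0.4 eye-open threshold are illustrative choices, not part of the official sample:

```swift
import UIKit
import AVFoundation
import FirebaseMLVision

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    let session = AVCaptureSession()

    // Classification mode must be enabled to get eye-open probabilities.
    lazy var faceDetector: VisionFaceDetector = {
        let options = VisionFaceDetectorOptions()
        options.classificationMode = .all
        return Vision.vision().faceDetector(options: options)
    }()

    func startCapture() {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .front),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        // Registering self as delegate is what makes sampleBuffer "appear".
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
        session.startRunning()
    }

    // AVFoundation calls this for every camera frame; the `sampleBuffer`
    // parameter here is the CMSampleBuffer the tutorial refers to.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let image = VisionImage(buffer: sampleBuffer)
        // Setting image.metadata with the correct orientation is omitted
        // for brevity; see the detect-faces guide for that step.

        faceDetector.process(image) { faces, error in
            guard error == nil, let faces = faces else { return }
            for face in faces where face.hasLeftEyeOpenProbability {
                if face.leftEyeOpenProbability < 0.4 {
                    print("Left eye likely closed")
                }
            }
        }
    }
}
```

The key point is that the tutorial's snippet is meant to live inside `captureOutput(_:didOutput:from:)`, which is why copying it elsewhere leaves `sampleBuffer` unresolved.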