I have only found examples that apply the Vision framework to live camera capture, which I already have working. I also want to apply body pose detection and drawing to video playback. The following code already plays back a video stored on my device:
let videoURL = URL(fileURLWithPath: NSString.path(withComponents: [documentsDirectory, path]))
let player = AVPlayer(url: videoURL)
let vc = AVPlayerViewController()
vc.player = player
present(vc, animated: true) {
vc.player?.play()
}
How can I send a modified version of the video to the player, using something like this to first detect people with the Vision framework:
let humanBodyPoseRequest = VNDetectHumanBodyPoseRequest()
let visionRequestHandler = VNImageRequestHandler(cgImage: frame)
// Use Vision to find human body poses in the frame.
do {
    try visionRequestHandler.perform([humanBodyPoseRequest])
} catch {
    assertionFailure("Human Pose Request failed: \(error)")
}
let poses = Pose.fromObservations(humanBodyPoseRequest.results)
on each frame of the video, and then draw each pose onto the corresponding frame before handing it to the AVPlayer:
pose.drawWireframeToContext(cgContext, applying: pointTransform)
I'll leave this here for other people to find: an AVMutableVideoComposition combined with the existing code, plus slight transformations to make the code from the Detecting Body Poses example work with CIImage, did the trick. Thanks for the comments.
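For anyone looking for more detail, here is a minimal sketch of that approach. It uses `AVMutableVideoComposition(asset:applyingCIFiltersWithHandler:)`, whose handler is invoked once per frame during playback. `Pose.fromObservations(_:)` and `drawWireframeToContext(_:applying:)` are assumed to come from Apple's Detecting Body Poses sample mentioned above; `makePoseComposition` is a name I made up for this sketch.

```swift
import AVFoundation
import CoreImage
import UIKit
import Vision

func makePoseComposition(for asset: AVAsset) -> AVMutableVideoComposition {
    let humanBodyPoseRequest = VNDetectHumanBodyPoseRequest()

    // The handler runs once per frame; whatever CIImage you finish with
    // is what the player renders for that frame.
    return AVMutableVideoComposition(asset: asset) { request in
        let frame = request.sourceImage
        let handler = VNImageRequestHandler(ciImage: frame)
        do {
            try handler.perform([humanBodyPoseRequest])
        } catch {
            // On failure, pass the unmodified frame through.
            request.finish(with: frame, context: nil)
            return
        }
        // Pose.fromObservations(_:) is from the sample project.
        let poses = Pose.fromObservations(humanBodyPoseRequest.results) ?? []

        // Render the frame into a CGContext, draw the wireframes on top,
        // and hand the composited image back to the player.
        let size = frame.extent.size
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        let composited = renderer.image { ctx in
            UIImage(ciImage: frame).draw(in: CGRect(origin: .zero, size: size))
            // Vision points are normalized (0...1); scale them to pixels.
            // Depending on your drawing code you may also need to flip the
            // y-axis, since Vision's origin is the bottom-left corner.
            let pointTransform = CGAffineTransform(scaleX: size.width, y: size.height)
            for pose in poses {
                pose.drawWireframeToContext(ctx.cgContext, applying: pointTransform)
            }
        }
        request.finish(with: CIImage(image: composited) ?? frame, context: nil)
    }
}
```

Then, instead of handing the AVPlayer a bare URL, attach the composition to an AVPlayerItem:

```swift
let asset = AVAsset(url: videoURL)
let item = AVPlayerItem(asset: asset)
item.videoComposition = makePoseComposition(for: asset)
let player = AVPlayer(playerItem: item)
```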