I'm currently experimenting with CoreImage, learning how to apply CIFilters to a camera feed. So far I've succeeded in taking a camera feed, applying a filter and writing the feed to an AVAssetWriter as a video, but one issue I'm having is that during the filtering process I actually crop the image data so that it always has square dimensions (needed for other aspects of the project).
My process is as follows (a rough sketch of the delegate method is included after the list):
- Capture feed using AVCaptureSession
- Take the CMSampleBufferRef from the capture output and acquire the CVPixelBufferRef
- Get the base address of the CVPixelBufferRef, and create a CGBitmapContext using the base address as its data (so we can overwrite it)
- Convert the CVPixelBufferRef to CIImage (using one of the CIImage constructors)
- Apply the filters to the CIImage
- Convert the CIImage to a CGImageRef
- Draw the CGImageRef into the CGBitmapContext (so the sample buffer's contents are overwritten in place)
- Append the CMSampleBufferRef to the AVAssetWriterInput.
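For reference, here's a rough, simplified sketch of what that delegate method looks like. It assumes BGRA output from the AVCaptureVideoDataOutput; `self.filter`, `self.ciContext` and `self.writerInput` are just placeholders for the preconfigured filter, CIContext and AVAssetWriterInput.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

// AVCaptureVideoDataOutputSampleBufferDelegate callback (simplified sketch)
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *baseAddress   = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t width        = CVPixelBufferGetWidth(pixelBuffer);
    size_t height       = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow  = CVPixelBufferGetBytesPerRow(pixelBuffer);

    // Bitmap context backed by the pixel buffer's memory, so drawing into it
    // overwrites the sample buffer's contents in place.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapContext =
        CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                              kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Wrap the pixel buffer in a CIImage, crop it square and run the filter.
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CGFloat side = MIN(width, height);
    CIImage *cropped = [inputImage imageByCroppingToRect:CGRectMake(0, 0, side, side)];
    [self.filter setValue:cropped forKey:kCIInputImageKey];
    CIImage *filtered = self.filter.outputImage;

    // Convert to a CGImageRef and draw it back into the same buffer.
    // Note: this only overwrites the square region; the rest of the original
    // frame data is still sitting in the buffer.
    CGImageRef cgImage = [self.ciContext createCGImage:filtered fromRect:filtered.extent];
    CGContextDrawImage(bitmapContext, CGRectMake(0, 0, side, side), cgImage);

    CGImageRelease(cgImage);
    CGContextRelease(bitmapContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    if (self.writerInput.isReadyForMoreMediaData) {
        [self.writerInput appendSampleBuffer:sampleBuffer];
    }
}
```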
Without drawing the CGImageRef to the context, this is what I get:
After drawing the CGImageRef to the context, this is what I get:
Ideally, I just want to be able to tell the CMSampleBufferRef that it has new dimensions, so that the extra image data outside the square is ignored. But I'm wondering if I'll have to create a new CMSampleBufferRef altogether.
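If it does come to creating a new one, I imagine it would look something like the untested sketch below (the helper name `createSquareSampleBuffer`, the 32BGRA format and the buffer attributes are just my guesses): render the filtered CIImage into a freshly allocated square CVPixelBufferRef, build a matching format description, and wrap it with the original buffer's timing info.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

// Hypothetical sketch of the "new CMSampleBufferRef" route.
static CMSampleBufferRef createSquareSampleBuffer(CIContext *ciContext,
                                                  CIImage *filteredImage,
                                                  CMSampleBufferRef originalBuffer,
                                                  size_t side)
{
    // Allocate a square pixel buffer (assuming 32BGRA) and render the
    // filtered image directly into it.
    CVPixelBufferRef squareBuffer = NULL;
    NSDictionary *attributes = @{ (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                                  (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferCreate(kCFAllocatorDefault, side, side, kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attributes, &squareBuffer);
    [ciContext render:filteredImage toCVPixelBuffer:squareBuffer];

    // Build a format description matching the new (square) dimensions.
    CMVideoFormatDescriptionRef formatDescription = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, squareBuffer,
                                                 &formatDescription);

    // Reuse the original buffer's timing so the writer keeps the right timestamps.
    CMSampleTimingInfo timingInfo;
    CMSampleBufferGetSampleTimingInfo(originalBuffer, 0, &timingInfo);

    CMSampleBufferRef squareSampleBuffer = NULL;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, squareBuffer, true, NULL, NULL,
                                       formatDescription, &timingInfo, &squareSampleBuffer);

    CFRelease(formatDescription);
    CVPixelBufferRelease(squareBuffer);
    return squareSampleBuffer;
}
```

I assume I'd also need the AVAssetWriterInput's output settings (AVVideoWidthKey / AVVideoHeightKey) to match the square dimensions, but I'm not sure whether that's the right approach overall.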
Any help would be greatly appreciated!