iOS: Cropping a CMSampleBufferRef before appending to AVAssetWriterInput


I'm currently experimenting with CoreImage, learning how to apply CIFilters to a camera feed. I've succeeded in taking a camera feed, applying a filter, and writing the feed to an AVAssetWriter as a video, but one issue I'm having is that during the filtering process I actually crop the image data so that it always has square dimensions (needed for other aspects of the project).

My process is as follows:

  1. Capture feed using AVCaptureSession
  2. Take the CMSampleBufferRef from the capture output and acquire the CVPixelBufferRef
  3. Get the Base Address of the CVPixelBufferRef, and create a CGBitmapContext using the base address as its data (so we can overwrite it)
  4. Convert the CVPixelBufferRef to CIImage (using one of the CIImage constructors)
  5. Apply the filters to the CIImage
  6. Convert the CIImage to CGImageRef
  7. Draw the CGImageRef to the CGBitmapContext (causing the sample buffer's contents to be overwritten)
  8. Append the CMSampleBufferRef to the AVAssetWriterInput.
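The steps above can be sketched roughly as follows in Swift. This is a minimal sketch, not the asker's actual code: the names `process`, `filter`, `ciContext`, and `writerInput` are assumptions, and error handling is elided.

```swift
import AVFoundation
import CoreImage
import CoreGraphics

// Sketch of steps 2–8, assuming `sampleBuffer` arrives from
// captureOutput(_:didOutput:from:) and `writerInput` belongs to an
// AVAssetWriter that has already started a session.
func process(_ sampleBuffer: CMSampleBuffer,
             filter: CIFilter,
             ciContext: CIContext,
             writerInput: AVAssetWriterInput) {
    // 2. Acquire the CVPixelBufferRef from the sample buffer.
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // 3. Wrap the pixel buffer's base address in a CGBitmapContext so that
    //    drawing into it overwrites the original frame in place.
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    let width  = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    guard let cgContext = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: width, height: height,
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue |
                    CGBitmapInfo.byteOrder32Little.rawValue)   // assumes BGRA frames
    else { return }

    // 4–5. CIImage from the pixel buffer, filtered, then cropped square.
    let side = min(width, height)
    filter.setValue(CIImage(cvPixelBuffer: pixelBuffer), forKey: kCIInputImageKey)
    guard let filtered = filter.outputImage?
        .cropped(to: CGRect(x: 0, y: 0, width: side, height: side)) else { return }

    // 6–7. Render to a CGImage and draw it back over the buffer. Because the
    //    buffer is still full-size, the area outside the square remains.
    if let cgImage = ciContext.createCGImage(filtered, from: filtered.extent) {
        cgContext.draw(cgImage, in: CGRect(x: 0, y: 0, width: side, height: side))
    }

    // 8. Append the (still full-size) sample buffer to the writer input.
    if writerInput.isReadyForMoreMediaData {
        _ = writerInput.append(sampleBuffer)
    }
}
```

Note that step 7 only overwrites pixels; it does not shrink the buffer, which is exactly the problem described below.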

Without drawing the CGImageRef to the context, this is what I get:

*(screenshot omitted)*

After drawing the CGImageRef to the context, this is what I get:

*(screenshot omitted)*

Ideally, I just want to be able to tell the CMSampleBufferRef that it has new dimensions, so that the additional information is omitted. But I'm wondering if I'll have to create a new CMSampleBufferRef altogether.
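A sample buffer's dimensions come from its pixel buffer and format description, so it can't simply be told it is smaller; the usual route is indeed to produce a new square pixel buffer and append that instead, typically via an AVAssetWriterInputPixelBufferAdaptor. A minimal sketch, assuming an adaptor (`adaptor`) whose `sourcePixelBufferAttributes` were configured with the square width and height, and a shared `ciContext`:

```swift
import AVFoundation
import CoreImage

// Render the cropped, filtered CIImage into a fresh square pixel buffer
// drawn from the adaptor's pool, then append it with the original frame's
// timestamp. Names here are assumptions, not from the question.
func appendCropped(_ sampleBuffer: CMSampleBuffer,
                   filtered: CIImage,
                   side: Int,
                   adaptor: AVAssetWriterInputPixelBufferAdaptor,
                   ciContext: CIContext) {
    guard adaptor.assetWriterInput.isReadyForMoreMediaData,
          let pool = adaptor.pixelBufferPool else { return }

    // Allocate a square buffer from the adaptor's pixel buffer pool.
    var square: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(nil, pool, &square)
    guard let output = square else { return }

    // Render only the square region into the new buffer.
    let cropRect = CGRect(x: 0, y: 0, width: side, height: side)
    ciContext.render(filtered.cropped(to: cropRect), to: output)

    // Reuse the original frame's presentation time so timing stays intact.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    _ = adaptor.append(output, withPresentationTime: pts)
}
```

With this approach the AVAssetWriterInput's `outputSettings` should also declare the square dimensions, and the in-place CGBitmapContext drawing from the earlier pipeline is no longer needed.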

Any help would be greatly appreciated!
