Apple's documentation for AVAssetReaderTrackOutput says the following about the outputSettings parameter of +[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:outputSettings:]:
A value of nil configures the output to vend samples in their original format as stored by the specified track.
When I use this on, e.g., an MP4 video asset, it appears to step through frames in decode order (i.e. out of order with respect to display), but every call to CMSampleBufferGetImageBuffer on the delivered CMSampleBufferRef objects yields a NULL CVImageBufferRef.
The only way I can ensure delivery of image buffers is to specify a pixel buffer format in outputSettings:, such as kCVPixelFormatType_32ARGB for the kCVPixelBufferPixelFormatTypeKey dictionary entry.
Another interesting side effect of doing this is that frames are then delivered in display order, and the underlying decode order is abstracted/hidden away.
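For reference, the configuration that does yield image buffers looks roughly like this (a sketch with error handling elided; videoURL is a placeholder for the asset's URL):

```objc
#import <AVFoundation/AVFoundation.h>

AVAsset *asset = [AVAsset assetWithURL:videoURL]; // videoURL: placeholder
AVAssetTrack *videoTrack =
    [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset
                                                      error:&error];

// Supplying a pixel format forces decoding; frames then arrive in
// display order rather than decode order.
NSDictionary *outputSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB)
};
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:outputSettings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [output copyNextSampleBuffer])) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // imageBuffer is non-NULL here; with outputSettings:nil it was NULL.
    CFRelease(sampleBuffer);
}
```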
Any ideas why this is so?
Like you, I expected that setting outputSettings to nil would result in output of native-format video frames, but this is not the case: you must specify something in order to get a valid CVImageBufferRef back from CMSampleBufferGetImageBuffer. All is not lost, though. Using a "barely there" dictionary seems to deliver frames in their native format; the IOSurface options are simply the defaults. Further reading, for reference: https://developer.apple.com/documentation/corevideo/kcvpixelbufferiosurfacepropertieskey?language=objc
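Something like the following minimal dictionary (an empty IOSurface properties dictionary, i.e. default IOSurface options, with no pixel format key) appears to be enough; videoTrack stands in for whatever AVAssetTrack you are reading:

```objc
// "Barely there" output settings: request default IOSurface backing only,
// leaving the pixel format unspecified so frames stay in the track's
// native format.
NSDictionary *outputSettings = @{
    (id)kCVPixelBufferIOSurfacePropertiesKey : @{} // empty => defaults
};
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:outputSettings];
```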