How to create video from ARKit face session depth pixel buffer


I'm trying to append frame.capturedDepthData.depthDataMap to an AVAssetWriterInputPixelBufferAdaptor, but appending always fails.

My adaptor is configured like this:

NSError* error;
videoWriter = [AVAssetWriter.alloc initWithURL:outputURL fileType:AVFileTypeMPEG4 error:&error];
if (error)
{
    NSLog(@"Error creating video writer: %@", error);
    return;
}

NSDictionary* videoSettings = @{
        AVVideoCodecKey: AVVideoCodecTypeH264,
        AVVideoWidthKey: @640,
        AVVideoHeightKey: @360
};

writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
writerInput.transform = CGAffineTransformMakeRotation(M_PI_2);

NSDictionary* sourcePixelBufferAttributesDictionary = @{
        (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_DepthFloat32)
};

adaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

if ([videoWriter canAddInput:writerInput])
{
    [videoWriter addInput:writerInput];
}
else
{
    NSLog(@"Error: cannot add writerInput to videoWriter.");
}

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

Then, in every session:(ARSession*)session didUpdateFrame:(ARFrame*)frame callback, I try to append the depth pixel buffer like this:

if (!adaptor.assetWriterInput.readyForMoreMediaData)
{
    NSLog(@"Asset writer input is not ready for more media data!");
}
else
{
    if (frame.capturedDepthData.depthDataMap != NULL)
    {
        frameCount++;
        CVPixelBufferRef pixelRef = frame.capturedDepthData.depthDataMap;
        BOOL result = [adaptor appendPixelBuffer:pixelRef withPresentationTime:CMTimeMake(frameCount, 15)];
    }
}

but the result of appending the pixel buffer is always NO.

Now, if I append frame.capturedImage to a properly configured adaptor instead, that always succeeds, and that's how I'm currently making a video file from the front camera.
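For reference, this is roughly the adaptor configuration that works for color frames. It's a sketch: colorAdaptor is a hypothetical name, and the pixel format is an assumption based on ARKit delivering camera frames as bi-planar YpCbCr buffers.

```objc
// Sketch: adaptor attributes matching ARKit's capturedImage format.
// The 420YpCbCr8BiPlanarFullRange format is an assumption about what
// the "properly configured adaptor" for capturedImage uses.
NSDictionary* colorAttributes = @{
        (NSString*) kCVPixelBufferPixelFormatTypeKey:
            @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};

colorAdaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:colorAttributes];
```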

How can I make a video from the depth pixel buffer?

1 Answer

BEST ANSWER

Here is an example of how to convert the depthDataMap pixel buffer into a pixel buffer that can be appended to the adaptor:

- (void) session:(ARSession*)session didUpdateFrame:(ARFrame*)frame
{
    CVPixelBufferRef depthDataMap = frame.capturedDepthData.depthDataMap;

    if(!depthDataMap)
    {
        // no depth data available
        return;
    }

    CIImage* image = [CIImage imageWithCVPixelBuffer:depthDataMap];
    CVPixelBufferRef buffer = NULL;
    CVReturn err = PixelBufferCreateFromImage(image, &buffer);
    if (err != kCVReturnSuccess || !buffer)
    {
        return;
    }

    frameDepthCount++;
    [adaptorDepth appendPixelBuffer:buffer withPresentationTime:CMTimeMake(frameDepthCount, 15)]; // 15 is the frame rate (fps)
    CVPixelBufferRelease(buffer); // buffer was created with a +1 retain count
}


CVReturn PixelBufferCreateFromImage(CIImage* ciImage, CVPixelBufferRef* outBuffer) {
    CIContext* context = [CIContext context]; // consider caching this context; creating one per frame is expensive

    NSDictionary* attributes = @{ (NSString*) kCVPixelBufferCGBitmapContextCompatibilityKey: @YES,
                                  (NSString*) kCVPixelBufferCGImageCompatibilityKey: @YES
    };

    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault,
                                       (size_t) ciImage.extent.size.width, (size_t) ciImage.extent.size.height,
                                       kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef _Nullable) (attributes),
                                       outBuffer);
    if (err)
    {
        return err;
    }

    if (*outBuffer)
    {
        [context render:ciImage toCVPixelBuffer:*outBuffer];
    }

    return kCVReturnSuccess;
}

The key is the PixelBufferCreateFromImage function: the depth map uses the kCVPixelFormatType_DepthFloat32 format, which the H.264 encoder cannot consume, so rendering it through Core Image into a 32-bit ARGB pixel buffer produces a buffer the adaptor will accept.
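One last step that's easy to miss: the writer has to be finalized once recording stops, or the file ends up unplayable. A minimal sketch (stopRecording and the writer/input variable names are assumptions; the AVFoundation calls themselves are real APIs):

```objc
// Sketch: finalizing the recording once the ARKit session stops.
- (void) stopRecording
{
    // Signal that no more buffers will be appended to this input.
    [writerInput markAsFinished];

    // Finish writing asynchronously; the movie file is complete
    // only after the completion handler fires.
    [videoWriter finishWritingWithCompletionHandler:^{
        NSLog(@"Depth video finished with status: %ld", (long) videoWriter.status);
    }];
}
```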