Get media with AVDepthData without an iPhone 7+


What's the best way to build apps that use AVDepthData without owning an iPhone 7+?

Depth data can only be captured on the iPhone 7+, which has the dual-lens camera. But I assume any iOS 11 device can handle depth data, provided it has access to photos that contain it. I could not find any such media resources from Apple or other parties online. Does anyone have some? Or is there a better way?

I've tried looking into the iPhone 7+ simulator's library, but the simulator crashes because it doesn't support Metal, which the depth demo apps use.


3 Answers

Accepted Answer

You will need someone (like me) who has an iPhone 7+ running iOS 11 to send you an image.

Visit this link in Safari on iOS 11 and tap More... -> Save Image:

http://hellocamera.co/img/depth-photo.heic

Note: I removed the GPS data from this image.
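
Once you've saved the image, here is a minimal sketch of reading the depth data back out with Image I/O on any iOS 11 device; the local file URL is a hypothetical placeholder for wherever you put the file:

    import AVFoundation
    import ImageIO

    // Hypothetical path to the saved depth-photo.heic
    let depthPhotoURL = URL(fileURLWithPath: "/path/to/depth-photo.heic")

    guard let source = CGImageSourceCreateWithURL(depthPhotoURL as CFURL, nil),
          // The dual camera stores disparity, so ask for that auxiliary data
          let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any],
          // Wrap the raw dictionary in an AVDepthData object
          let depthData = try? AVDepthData(fromDictionaryRepresentation: auxInfo)
    else {
        fatalError("no depth data found in image")
    }

    print(depthData.depthDataType)   // the CVPixelBuffer format of the depth map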

Answer 2

I guess you can handle the depth data of photos on any iOS device. All you need are samples of photos taken with an iPhone 7+. Here are a few of them.

Answer 3

Though it is a non-trivial task, it is possible to generate AVDepthData and add it to your own image.

  1. Create a depth/disparity dictionary like the one documented for CGImageSourceCopyAuxiliaryDataInfoAtIndex in CGImageSource.h. Here are more details:

Key kCGImageAuxiliaryDataInfoData - (CFDataRef) - the depth data

This contains just the binary pixel data: the bytes you would pull out of a CVPixelBuffer by locking it with CVPixelBufferLockBaseAddress and reading from its base address pointer. You create the CVPixelBuffer with one of the supported formats:

  • kCVPixelFormatType_DisparityFloat16 = 'hdis', /* IEEE754-2008 binary16 (half float), describing the normalized shift when comparing two images. Units are 1/meters: ( pixelShift / (pixelFocalLength * baselineInMeters) ) */
  • kCVPixelFormatType_DisparityFloat32 = 'fdis', /* IEEE754-2008 binary32 float, describing the normalized shift when comparing two images. Units are 1/meters: ( pixelShift / (pixelFocalLength * baselineInMeters) ) */
  • kCVPixelFormatType_DepthFloat16 = 'hdep', /* IEEE754-2008 binary16 (half float), describing the depth (distance to an object) in meters */
  • kCVPixelFormatType_DepthFloat32 = 'fdep', /* IEEE754-2008 binary32 float, describing the depth (distance to an object) in meters */

To turn an arbitrary grayscale image into a fake depth buffer, you'll need to convert each pixel's grayscale value (0 = black to 1 = white, zNear to zFar, etc.) to either meters or 1/meters, depending on your target format, and get the values into the right floating-point representation, depending on where you're getting them from.
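
For example, here is a minimal sketch of filling a DepthFloat32 pixel buffer from normalized grayscale values; the grayscalePixels array, dimensions, and zNear/zFar range are all illustrative assumptions:

    import CoreVideo

    // Assumed inputs: normalized grayscale values (0.0...1.0), row-major
    let width = 640, height = 480
    let grayscalePixels = [Float](repeating: 0.5, count: width * height)
    let zNear: Float = 0.5, zFar: Float = 5.0   // assumed scene range in meters

    // Create a depth-format pixel buffer
    var pixelBufferOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_DepthFloat32, nil, &pixelBufferOut)
    guard let pixelBuffer = pixelBufferOut else { fatalError("buffer creation failed") }

    // Write depth values (in meters) into the buffer
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    let rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let base = CVPixelBufferGetBaseAddress(pixelBuffer)!
    for y in 0..<height {
        // Rows may be padded, so step by bytes-per-row rather than by width
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float.self)
        for x in 0..<width {
            // Map 0...1 grayscale onto the zNear...zFar range in meters
            row[x] = zNear + grayscalePixels[y * width + x] * (zFar - zNear)
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

    // These raw bytes are what goes under kCGImageAuxiliaryDataInfoData
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    let depthBytes = Data(bytes: CVPixelBufferGetBaseAddress(pixelBuffer)!,
                          count: rowBytes * height)
    CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)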

Key kCGImageAuxiliaryDataInfoDataDescription - (CFDictionary) - the depth data description

This tells you how to interpret the buffer when the system hands you one, or tells the system how to interpret a buffer you provide:

  • kCGImagePropertyPixelFormat is one of the CoreVideo/CVPixelBuffer.h depth/disparity formats
  • kCGImagePropertyWidth/Height are the pixel dimensions
  • kCGImagePropertyBytesPerRow is exactly what it says on the tin

Key kCGImageAuxiliaryDataInfoMetadata - (CGImageMetadataRef) - metadata

This value is optional.

  2. Create an AVDepthData with init(fromDictionaryRepresentation: [AnyHashable : Any]), passing the dictionary created above (a sketch of assembling that dictionary follows the code below).
  3. Create an image using Image I/O:

    // Create the image destination, backed by an in-memory buffer;
    // HEIC ("public.heic") supports auxiliary depth data
    let data = NSMutableData()
    guard let cgImageDestination = CGImageDestinationCreateWithData(
        data as CFMutableData, "public.heic" as CFString, 1, nil) else { return nil }

    // Add the rendered image (and any attachments) to the destination
    CGImageDestinationAddImage(cgImageDestination, renderedCGImage, attachments)

    // Use AVDepthData to get the auxiliary data dictionary and its type
    var auxDataType: NSString?
    let auxData = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxDataType)

    // Add the auxiliary depth data to the image destination
    CGImageDestinationAddAuxiliaryDataInfo(cgImageDestination,
                                           auxDataType! as CFString,
                                           auxData! as CFDictionary)

    if CGImageDestinationFinalize(cgImageDestination) {
        return data as Data
    }
    return nil
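
For reference, steps 1 and 2 might look something like the following sketch; it assumes the width, height, rowBytes, and depthBytes values produced by the grayscale-conversion sketch earlier:

    import AVFoundation
    import ImageIO

    // Step 1: assemble the auxiliary data dictionary by hand
    let description: [AnyHashable: Any] = [
        kCGImagePropertyPixelFormat as String: kCVPixelFormatType_DepthFloat32,
        kCGImagePropertyWidth as String: width,
        kCGImagePropertyHeight as String: height,
        kCGImagePropertyBytesPerRow as String: rowBytes
    ]
    // kCGImageAuxiliaryDataInfoMetadata is optional and omitted here
    let auxDataInfo: [AnyHashable: Any] = [
        kCGImageAuxiliaryDataInfoData as String: depthBytes,  // raw pixel bytes
        kCGImageAuxiliaryDataInfoDataDescription as String: description
    ]

    // Step 2: wrap the dictionary in an AVDepthData
    // (a real implementation would handle the thrown error)
    let depthData = try! AVDepthData(fromDictionaryRepresentation: auxDataInfo)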