Transforming CGPoint results returned from CIFaceFeature


I am trying to figure out how to transform the CGPoint results returned from CIFaceFeature in order to draw with them in a CALayer. Previously I normalized my image to have zero rotation to make things easier, but that causes problems for images taken with the device held in landscape mode.

I've been working at this for a while without success, and I am not sure whether my understanding of the task is wrong, my approach is wrong, or both. Here is what I think is correct:

[Image: original image from camera]

According to the documentation for the CIDetector featuresInImage:options: method:

A dictionary that specifies the orientation of the image. The detection is 
adjusted to account for the image orientation but the coordinates in the 
returned feature objects are based on those of the image.

[Image: image as displayed in UIImageView]

In the code below I am trying to rotate a CGPoint in order to draw it in a CAShapeLayer which overlays a UIImageView.

What I am doing (...or think I am doing...) is translating the left-eye CGPoint to the center of the view, rotating by 90 degrees, then translating the point back to where it was. This is not correct, but I don't know where I am going wrong. Is my approach wrong, or the way I am implementing it?

#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)

// leftEyePosition is a CGPoint

CGAffineTransform  transRot = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(90));

float x = self.center.x;
float y = self.center.y;
CGAffineTransform tCenter = CGAffineTransformMakeTranslation(-x, -y);
CGAffineTransform tOffset = CGAffineTransformMakeTranslation(x, y);

leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, tCenter);
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, transRot);
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, tOffset);
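As an aside, the three separate point transforms can be composed into a single CGAffineTransform. A minimal sketch of the same translate-rotate-translate intent (note that CGAffineTransformRotate and CGAffineTransformTranslate prepend their step, so the lines read in reverse order of application):

// Composed equivalent of the three applications above:
// translate to origin, rotate 90 degrees, translate back.
CGAffineTransform t = CGAffineTransformMakeTranslation(x, y);   // applied last
t = CGAffineTransformRotate(t, DEGREES_TO_RADIANS(90));         // applied second
t = CGAffineTransformTranslate(t, -x, -y);                      // applied first
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, t);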

From this post: https://stackoverflow.com/a/14491293/840992, it seems I need to make rotations based on the imageOrientation:

Orientation

Apple/UIImage.imageOrientation   Device held       Jpeg/File kCGImagePropertyOrientation

UIImageOrientationUp    = 0      Landscape left    1
UIImageOrientationDown  = 1      Landscape right   3
UIImageOrientationLeft  = 2      Portrait down     8
UIImageOrientationRight = 3      Portrait up       6


There are 2 answers below.

Answer 1

I needed to figure out the exact same problem. Apple's "SquareCam" sample operates directly on a video output, but I needed the results from a still UIImage. So I extended the CIFaceFeature class with some conversion methods to get the correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The complete implementation is posted here: https://gist.github.com/laoyang/5747004. You can use it directly.

Here is the most basic conversion for a point from CIFaceFeature; the returned CGPoint is converted based on the image's orientation:

// Note: CIFaceFeature coordinates use Core Image's bottom-left origin,
// while UIKit drawing uses a top-left origin; this method maps a feature
// point into UIKit coordinates for the given image orientation.
- (CGPoint) pointForImage:(UIImage*) image fromPoint:(CGPoint) originalPoint {

    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;

    CGPoint convertedPoint;

    switch (image.imageOrientation) {
        case UIImageOrientationUp:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDown:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeft:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        case UIImageOrientationRight:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationUpMirrored:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDownMirrored:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeftMirrored:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationRightMirrored:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        default:
            break;
    }
    return convertedPoint;
}
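As a quick sanity check, a hypothetical call site (assumed names: face is a CIFaceFeature, imageView displays the photo):

// Hypothetical usage: convert the raw (bottom-left-origin) eye position
// into UIKit coordinates for the displayed image.
CGPoint raw = face.leftEyePosition;
CGPoint converted = [self pointForImage:self.imageView.image fromPoint:raw];
NSLog(@"raw %@ -> converted %@",
      NSStringFromCGPoint(raw), NSStringFromCGPoint(converted));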

And here are the category methods based on the above conversion:

// Get converted features with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image;
- (CGPoint) rightEyePositionForImage:(UIImage *)image;
- (CGPoint) mouthPositionForImage:(UIImage *)image;
- (CGRect) boundsForImage:(UIImage *)image;

// Get normalized features (0-1) with respect to the imageOrientation property
- (CGPoint) normalizedLeftEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedRightEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedMouthPositionForImage:(UIImage *)image;
- (CGRect) normalizedBoundsForImage:(UIImage *)image;

// Get feature locations inside a given view size, with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGRect) boundsForImage:(UIImage *)image inView:(CGSize)viewSize;
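A sketch of how one of these wrappers might look in terms of pointForImage:fromPoint: above (an assumption about the gist's internals, not quoted from it):

// Hypothetical implementation: forward the feature's own left-eye point
// through the orientation-aware conversion (category on CIFaceFeature).
- (CGPoint) leftEyePositionForImage:(UIImage *)image {
    return [self pointForImage:image fromPoint:self.leftEyePosition];
}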

Another thing to notice: you need to specify the correct EXIF orientation, derived from the UIImage's imageOrientation, when extracting the face features. It's quite confusing... here is what I did:

int exifOrientation = 1; // default to 1 (Up) for any unhandled orientation
switch (self.image.imageOrientation) {
    case UIImageOrientationUp:
        exifOrientation = 1;
        break;
    case UIImageOrientationDown:
        exifOrientation = 3;
        break;
    case UIImageOrientationLeft:
        exifOrientation = 8;
        break;
    case UIImageOrientationRight:
        exifOrientation = 6;
        break;
    case UIImageOrientationUpMirrored:
        exifOrientation = 2;
        break;
    case UIImageOrientationDownMirrored:
        exifOrientation = 4;
        break;
    case UIImageOrientationLeftMirrored:
        exifOrientation = 5;
        break;
    case UIImageOrientationRightMirrored:
        exifOrientation = 7;
        break;
    default:
        break;
}

NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];

NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
                                          options:@{CIDetectorImageOrientation:[NSNumber numberWithInt:exifOrientation]}];

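From there, a hypothetical loop to convert each detected feature for display (overlay drawing elided):

// Hypothetical follow-up: convert each detected feature's points
// into UIKit coordinates for the displayed image.
for (CIFaceFeature *face in features) {
    if (face.hasLeftEyePosition) {
        CGPoint leftEye = [face leftEyePositionForImage:self.image];
        // ...draw leftEye into the overlay CAShapeLayer
    }
}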

Answer 2

I think you need to flip the found face coordinates about the horizontal center axis of the image.

Can you try this transform:

CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, 0.0f, image.size.height);
transform = CGAffineTransformScale(transform, 1.0f, -1.0f);
[path applyTransform:transform];

Note that this transform only works if the image has been normalized to zero rotation (an imageOrientation of UIImageOrientationUp) before finding faces.
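A hypothetical end-to-end sketch (assumed names: faceFeature, image, shapeLayer), flipping a face-bounds path into UIKit coordinates for display:

// Hypothetical usage: build a path from the detected face bounds
// (Core Image, bottom-left origin) and flip it for top-left-origin drawing.
// Assumes the image's orientation is UIImageOrientationUp, per the note above.
UIBezierPath *path = [UIBezierPath bezierPathWithRect:faceFeature.bounds];

CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, 0.0f, image.size.height);
transform = CGAffineTransformScale(transform, 1.0f, -1.0f);
[path applyTransform:transform];

shapeLayer.path = path.CGPath;   // draw in the overlay CAShapeLayer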