I am trying to figure out how to transform the CGPoint
results returned from CIFaceFeature
in order to draw with them in a CALayer
. Previously I had normalized my image to have 0 rotation in order to make things easier but that causes problems for images taken with the device held in landscape mode.
I've been working at this for a while without success and I am not sure if my understanding of the task is incorrect or if my approach is incorrect, or both. Here is what I think is correct:
According to the documentation for the CIDetector
featuresInImage:options:
method
A dictionary that specifies the orientation of the image. The detection is
adjusted to account for the image orientation but the coordinates in the
returned feature objects are based on those of the image.
In the code below I am trying to rotate a CGPoint in order to draw it through a CAShape layer which overlays a UIImageView.
What I am doing (...or think I am doing...) is translating the left eye CGPoint to the center of the view, rotating it by 90 degrees, then translating the point back to where it was. This is not correct, but I don't know where I am going wrong. Is my approach wrong, or the way I am implementing it?
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)
// leftEyePosition is a CGPoint
CGAffineTransform transRot = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(90));
float x = self.center.x;
float y = self.center.y;
CGAffineTransform tCenter = CGAffineTransformMakeTranslation(-x, -y);
CGAffineTransform tOffset = CGAffineTransformMakeTranslation(x, y);
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, tCenter);
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, transRot);
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, tOffset);
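For what it's worth, that three-step composition (translate to the origin, rotate, translate back) does rotate a point about (x, y) when the transforms are applied to the point in that order. A quick language-agnostic check in Python (a sketch with made-up numbers, standing in for the CGAffineTransform calls) bears that out, which points the suspicion at the coordinate space (Core Image uses a bottom-left origin, UIKit a top-left one) rather than at the composition itself:

```python
import math

def rotate_about(point, center, degrees):
    """Translate to the origin, rotate, translate back --
    the same three-step composition as the CGAffineTransform code."""
    px, py = point[0] - center[0], point[1] - center[1]
    a = math.radians(degrees)
    rx = px * math.cos(a) - py * math.sin(a)
    ry = px * math.sin(a) + py * math.cos(a)
    return (rx + center[0], ry + center[1])
```

Rotating (150, 100) by 90 degrees about a (100, 100) center lands at roughly (100, 150), which is exactly what a correct rotation about the center should produce.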
From this post: https://stackoverflow.com/a/14491293/840992, I need to make rotations based on the imageOrientation
Orientation

    Apple/UIImage.imageOrientation     Jpeg/File kCGImagePropertyOrientation
    UIImageOrientationUp    = 0        Landscape left  = 1
    UIImageOrientationDown  = 1        Landscape right = 3
    UIImageOrientationLeft  = 2        Portrait down   = 8
    UIImageOrientationRight = 3        Portrait up     = 6
Message was edited by skinnyTOD on 2/1/13 at 4:09 PM
I needed to figure out the exact same problem. Apple's "SquareCam" sample operates directly on the video output, but I needed the results from a still UIImage. So I extended the CIFaceFeature class with some conversion methods to get the correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The complete implementation is posted here: https://gist.github.com/laoyang/5747004. You can use it directly.
Here is the most basic conversion for a point from CIFaceFeature; the returned CGPoint is converted based on the image's orientation:
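The linked gist is Objective-C, and since the code itself isn't reproduced here, the following is a hedged Python sketch of what that per-orientation conversion amounts to (the function name and orientation constants are mine; `w` and `h` are the pixel dimensions of the underlying bitmap before orientation is applied, and the mirrored orientations are omitted):

```python
# (x, y) is a CIFaceFeature point: bottom-left origin, in the pixel
# space of the un-rotated bitmap (w x h). The result is a
# top-left-origin point in the displayed image's coordinate space.

UP, DOWN, LEFT, RIGHT = 0, 1, 2, 3   # UIImageOrientation raw values

def convert_point(x, y, w, h, orientation):
    if orientation == UP:       # bitmap shown as-is: just flip the y axis
        return (x, h - y)
    if orientation == DOWN:     # shown rotated 180 degrees
        return (w - x, y)
    if orientation == RIGHT:    # shown rotated 90 degrees clockwise
        return (y, x)
    if orientation == LEFT:     # shown rotated 90 degrees counterclockwise
        return (h - y, w - x)
    raise ValueError("mirrored orientations not handled in this sketch")
```

For example, a feature at (100, 300) in a 640x480 bitmap shot with orientation Right comes out at (300, 100) in the displayed 480x640, top-left-origin image.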
And here are the category methods based on the above conversion:
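Those category methods additionally map from image coordinates into the coordinate space of the UIImageView (or backing CALayer) that displays the image. Again, the gist's code isn't shown here, but for the common UIViewContentModeScaleAspectFit case the mapping reduces to a uniform scale plus a centering offset; this Python sketch (my own names, assuming aspect-fit) shows the idea:

```python
def fit_scale_and_offset(image_w, image_h, view_w, view_h):
    """For UIViewContentModeScaleAspectFit: the image is scaled by the
    smaller axis ratio and centered, leaving letterbox bars on one axis."""
    scale = min(view_w / image_w, view_h / image_h)
    off_x = (view_w - image_w * scale) / 2
    off_y = (view_h - image_h * scale) / 2
    return scale, off_x, off_y

def image_point_to_view(p, image_size, view_size):
    """Map a top-left-origin image point into the view's coordinates."""
    scale, off_x, off_y = fit_scale_and_offset(*image_size, *view_size)
    return (p[0] * scale + off_x, p[1] * scale + off_y)
```

A 480x640 image aspect-fit into a 320x480 view is scaled by 2/3 and letterboxed about 26.7 points top and bottom, so image point (300, 100) maps to roughly (200, 93.3) in the view.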
(Another thing to notice: you need to specify the correct EXIF orientation when extracting the face features, based on the UIImage's orientation. Quite confusing... here is what I did:
)
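For reference, the UIImageOrientation-to-EXIF mapping that gets passed under the CIDetectorImageOrientation key in the options dictionary of featuresInImage:options: can be tabulated like this (a Python sketch matching the orientation table in the question; the four mirrored entries are my addition from the EXIF spec, so double-check them):

```python
# UIImageOrientation raw value -> EXIF / kCGImagePropertyOrientation value,
# i.e. the number to pass as CIDetectorImageOrientation.
EXIF_FOR_UIIMAGE_ORIENTATION = {
    0: 1,  # Up
    1: 3,  # Down
    2: 8,  # Left
    3: 6,  # Right
    4: 2,  # UpMirrored
    5: 4,  # DownMirrored
    6: 5,  # LeftMirrored
    7: 7,  # RightMirrored
}
```

On the Objective-C side the looked-up value would go into the detector call as @{CIDetectorImageOrientation : @(exifValue)}.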