I'm trying to get the text areas in an image using a CIDetector of type CIDetectorTypeText.
- (NSArray *)detectWithImage:(UIImage *)img
{
    // prepare CIImage
    CIImage *image = [CIImage imageWithCGImage:img.CGImage];

    // flip vertically
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:image forKey:kCIInputImageKey];
    CGAffineTransform t = CGAffineTransformMakeTranslation(0, CGRectGetHeight(image.extent));
    t = CGAffineTransformScale(t, 1.0, -1.0);
    [filter setValue:[NSValue valueWithCGAffineTransform:t] forKey:kCIInputTransformKey];
    image = filter.outputImage;

    // prepare CIDetector
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeText
                                              context:nil
                                              options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];

    // retrieve array of CITextFeature
    NSArray *features = [detector featuresInImage:image
                                          options:@{CIDetectorReturnSubFeatures: @YES}];
    return features;
}
The image passed is:
I get nothing back from this image. I tried with a color image as well, and also without flipping the image.
Can someone point me in the right direction?
Thanks!

You should check to make sure the UIImage and img.CGImage being passed into your function are not nil, as the rest of your code seems to be fine, though the flip is not necessary. For example:
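Here is a minimal sketch of that check, calling the method above before detection. The image name text.png and the surrounding method are illustrative assumptions, not part of the original code:

- (void)runDetection
{
    // Illustrative name; substitute whatever image you are actually passing in.
    UIImage *img = [UIImage imageNamed:@"text.png"];

    // If the image failed to load, or was created from a CIImage,
    // img.CGImage will be nil and the detector has nothing to scan.
    if (img == nil || img.CGImage == nil) {
        NSLog(@"image or its CGImage is nil -- nothing to detect");
        return;
    }

    NSArray *features = [self detectWithImage:img];
    NSLog(@"found %lu text feature(s)", (unsigned long)features.count);
}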
It produced a result where the red highlight represents the bounds returned from the CIDetector.
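For reference, here is one way such a highlight can be drawn (a sketch under assumptions, not necessarily how the result above was produced): CITextFeature bounds come back in Core Image's bottom-left-origin coordinate space, so they need to be flipped vertically before drawing in UIKit. This assumes the image view's size matches the image's point size; otherwise the rects would also need scaling.

- (void)highlightFeatures:(NSArray *)features inImageView:(UIImageView *)imageView
{
    CGFloat imageHeight = imageView.image.size.height;

    for (CITextFeature *feature in features) {
        // Convert from Core Image's bottom-left origin to UIKit's top-left origin.
        CGRect bounds = feature.bounds;
        bounds.origin.y = imageHeight - CGRectGetMaxY(bounds);

        // Draw a red border around the detected text area.
        CALayer *highlight = [CALayer layer];
        highlight.frame = bounds;
        highlight.borderColor = [UIColor redColor].CGColor;
        highlight.borderWidth = 2.0;
        [imageView.layer addSublayer:highlight];
    }
}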