CIDetector featuresInImage returning 0


I'm trying to detect the text areas in an image using CIDetector.

- (NSArray *)detectWithImage:(UIImage *)img
{
    // prepare CIImage
    CIImage *image = [CIImage imageWithCGImage:img.CGImage];

    // flip vertically
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:image forKey:kCIInputImageKey];
    CGAffineTransform t = CGAffineTransformMakeTranslation(0, CGRectGetHeight(image.extent));
    t = CGAffineTransformScale(t, 1.0, -1.0);
    [filter setValue:[NSValue valueWithCGAffineTransform:t] forKey:kCIInputTransformKey];
    image = filter.outputImage;


    // prepare CIDetector
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeText
                                              context:nil
                                              options:@{
                                                        CIDetectorAccuracy: CIDetectorAccuracyHigh}];
    // retrieve array of CITextFeature
    NSArray *features = [detector featuresInImage:image
                                          options:@{CIDetectorReturnSubFeatures: @YES}];

    return features;
}

The image passed is:

(image attached)

I get no features back from this image. I also tried a color image, and tried without flipping the image.

Can someone point me in the right direction?

Thanks!


Answer by beyowulf:

You should check that the UIImage and its img.CGImage being passed into your function are not nil; the rest of your code seems fine, though the flip is not necessary. For example:

UIImageView *imageView = [[UIImageView alloc] initWithImage: img];
CIImage *image = [CIImage imageWithCGImage:img.CGImage];

CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeText
                                              context:nil
                                              options:@{
                                                        CIDetectorAccuracy: CIDetectorAccuracyHigh}];
// retrieve array of CITextFeature
NSArray *features = [detector featuresInImage:image options:@{CIDetectorReturnSubFeatures: @YES}];

for (CITextFeature *feature in features) {
    CGRect b = feature.bounds;
    // CIDetector coordinates have a bottom-left origin; flip into UIKit's top-left space
    CGRect frame = CGRectMake(b.origin.x,
                              img.size.height - b.origin.y - b.size.height,
                              b.size.width,
                              b.size.height);
    UIView *view = [[UIView alloc] initWithFrame:frame];
    view.backgroundColor = [[UIColor redColor] colorWithAlphaComponent:0.25];
    [imageView addSubview:view];
}

This produced the result shown in the attached screenshot, where the red highlights represent the bounds returned from the CIDetector.
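As a quick sanity check for the nil condition mentioned above, you could guard at the top of the question's detectWithImage: method before building the CIImage (a minimal sketch, assuming that same method signature). Note that img.CGImage can legitimately be NULL, for example when the UIImage was created from a CIImage rather than decoded from a file, and [CIImage imageWithCGImage:] with a NULL image will never yield features:

```objc
- (NSArray *)detectWithImage:(UIImage *)img
{
    // Guard: a nil UIImage or a NULL backing CGImage would silently
    // produce an empty feature array from the detector.
    if (img == nil || img.CGImage == NULL) {
        NSLog(@"detectWithImage: received a nil image or an image with no CGImage");
        return @[];
    }

    CIImage *image = [CIImage imageWithCGImage:img.CGImage];
    // ... proceed with CIDetector as in the code above ...
    return @[];
}
```

If the log line fires, the problem is in how the UIImage is created upstream, not in the detector configuration.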