I have initialized the text line detector as below:
self.textDetector = [GMVDetector detectorOfType:GMVDetectorTypeText options:nil];
Since I want only the lines and not the whole block, I'm directly accessing GMVTextLineFeature. The input image is a UIImage taken directly from the camera preview:
NSArray<GMVTextLineFeature *> *features = [self.textDetector featuresInImage:[_Result originalImage] options:nil];
But the above array is nil.
[myOperation setCompletionBlock:^{
    for (GMVTextLineFeature *textLine in features) {
        NSLog(@"value of each element: %@", textLine.value);
        _Result.text = textLine.value;
    }
    [self finishDetection];
}];
[_operationQueue addOperation:myOperation];
My concern is that my project is built with Gradle while GoogleMobileVision is distributed through CocoaPods, so I manually copied the framework files into my project and linked them under Frameworks and Libraries. I also added the frameworks' resource bundles, which contain all the conv config files, under Copy Bundle Resources.
Yet the features array is nil. I have also clean-built the project multiple times. Since I'm new to iOS, I'm unable to figure out whether the problem is with the CocoaPods-to-Gradle setup or with the way it is implemented, but this is how it is done in the demo app TextDetectorDemo. I'm using Xcode 9.4.
Any insight or any workarounds will be much appreciated.
Thanks in advance.
Convert or recreate the UIImage before passing it to Mobile Vision: re-rendering the image gives the detector a freshly backed CGImage in a standard format, and the filter used while re-rendering can be changed to suit your needs.
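A minimal sketch of such a helper, assuming the camera delivers a CGImage-backed UIImage (the method name `preprocessedImage:` and the choice of CIColorControls are illustrative, not anything Mobile Vision requires):

```objc
#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// Hypothetical helper: re-render the UIImage through Core Image so the
// detector receives a freshly backed CGImage with an Up orientation.
// Swap the filter (here a mild contrast boost) for whatever
// preprocessing suits your input.
- (UIImage *)preprocessedImage:(UIImage *)image {
    // Note: image.CGImage is nil for CIImage-backed UIImages; guard if needed.
    CIImage *input = [CIImage imageWithCGImage:image.CGImage];

    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:@(1.1) forKey:kCIInputContrastKey];

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:filter.outputImage
                                       fromRect:filter.outputImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage
                                          scale:image.scale
                                    orientation:UIImageOrientationUp];
    CGImageRelease(cgImage);
    return result;
}
```

Pass the returned image to featuresInImage:options: instead of the raw preview image. If your source image has a non-Up orientation, you can also report it to the detector via the GMVDetectorImageOrientation key in the options dictionary rather than baking it into the bitmap.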