I'm trying to make an app that processes a set of frames, stored as JPG files, using the Google Mobile Vision (GMV) API.
The pipeline is simple; a sketch of the assembled loop follows the three steps below.
1) I create the detector with some options:
_options = @{
    GMVDetectorFaceLandmarkType : @(GMVDetectorFaceLandmarkAll),             // detect all facial landmarks
    GMVDetectorFaceClassificationType : @(GMVDetectorFaceClassificationAll), // smiling/eyes-open probabilities
    GMVDetectorFaceTrackingEnabled : @(NO)                                   // frames are independent stills
};
_faceDetector = [GMVDetector detectorOfType:GMVDetectorTypeFace options:_options];
2) I read a frame with this method:
UIImage *image = [UIImage imageWithContentsOfFile:imFile];
The path contained in imFile is correct, and I can see the image representation in the debugger, so the frame loads fine.
3) Finally, I process the frame:
NSArray<GMVFaceFeature *> *faces = [_faceDetector featuresInImage:image options:nil];
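Assembled, the three steps run in a loop over all the frames, roughly like this (just a sketch: framePaths, fNumber and the loop bookkeeping are placeholders, not the real code):

unsigned int fNumber = 0;
for (NSString *imFile in framePaths) {
    // step 2: load the frame from disk
    UIImage *image = [UIImage imageWithContentsOfFile:imFile];
    // step 3: run face detection on it
    NSArray<GMVFaceFeature *> *faces = [_faceDetector featuresInImage:image options:nil];
    // per-frame result handling (see EDIT below)
    fNumber++;
}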
This works for a small number of frames, but when analyzing many of them the app's memory usage keeps growing until the system kills the app.
I've tried to track down the leak, and as far as I can tell the memory is allocated inside the last step, in [_faceDetector featuresInImage:options:].
Am I doing something wrong, or is there a memory leak inside the detector? I've searched for a known issue from Google but couldn't find one.
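In case it matters: under ARC, objects that come back autoreleased (like the faces array here) are only freed when the enclosing autorelease pool drains, so a long tight loop over frames normally wraps each iteration in @autoreleasepool. A sketch of the loop above with that added:

for (NSString *imFile in framePaths) {
    @autoreleasepool {
        // each iteration gets its own pool, so autoreleased
        // per-frame objects are released before the next frame
        UIImage *image = [UIImage imageWithContentsOfFile:imFile];
        NSArray<GMVFaceFeature *> *faces = [_faceDetector featuresInImage:image options:nil];
        // per-frame result handling goes here
    }
}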
EDIT:
Here is what I do with each of the results of the detector:
if ([faces count] > 0) {
    GMVFaceFeature *face = [faces objectAtIndex:0];
    // Open the results file for appending; fileHandleForWritingAtPath:
    // returns nil if the file doesn't exist (it's created beforehand).
    NSFileHandle *myHandle = [NSFileHandle fileHandleForWritingAtPath:filename];
    [myHandle seekToEndOfFile];
    // Append one CSV line per frame: "<frame number>,<smiling probability>"
    NSString *lineToWrite = [NSString stringWithFormat:@"%u,%f\n", fNumber, face.smilingProbability];
    // Note: -writeData: raises an exception on failure; it doesn't
    // report through an NSError, so there is nothing else to check here.
    [myHandle writeData:[lineToWrite dataUsingEncoding:NSUTF8StringEncoding]];
    // Close the handle so one file descriptor isn't left open per frame.
    [myHandle closeFile];
}
The method ends there. So basically all I do with each result is append one line to a file that was created beforehand.
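Since that write happens once per frame, a variant (just a sketch of a design alternative, not what my code currently does) would be to create the file and open the handle once before the loop, and close it once at the end, instead of reopening the file for every frame:

// Before the loop: create the results file and keep one handle open.
[[NSFileManager defaultManager] createFileAtPath:filename contents:nil attributes:nil];
NSFileHandle *csvHandle = [NSFileHandle fileHandleForWritingAtPath:filename];

// Inside the per-frame loop:
NSString *line = [NSString stringWithFormat:@"%u,%f\n", fNumber, face.smilingProbability];
[csvHandle writeData:[line dataUsingEncoding:NSUTF8StringEncoding]];

// After the loop:
[csvHandle closeFile];

That takes per-frame file handling out of the picture when profiling, although the growth I'm seeing points at featuresInImage: either way.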