CoreGraphics: Mass image partition


I have to split 50 square images (600px x 600px) into 9 equal-size (200x200) square parts. To get each part I use this method:

    -(UIImage *)getSubImageFrom:(UIImage *)img WithRect:(CGRect)rect
    {
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();

        // translated rectangle for drawing sub image
        CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, img.size.width, img.size.height);

        // clip to the bounds of the image context
        // not strictly necessary as it will get clipped anyway?
        CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));

        // draw image
        [img drawInRect:drawRect];

        // grab image
        UIImage* subImage = UIGraphicsGetImageFromCurrentImageContext();

        // balance UIGraphicsBeginImageContext; do NOT CGContextRelease() a
        // context you did not create yourself
        UIGraphicsEndImageContext();

        return subImage;
    }
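For reference, a driver for the method above might look like this 3x3 loop (a minimal sketch; `path` and the receiver of `getSubImageFrom:WithRect:` are assumptions):

    // Split one 600x600 image into a 3x3 grid of 200x200 tiles.
    UIImage *source = [UIImage imageWithContentsOfFile:path];
    NSMutableArray *tiles = [NSMutableArray arrayWithCapacity:9];
    CGFloat tileSize = 200.0;
    for (NSInteger row = 0; row < 3; row++) {
        for (NSInteger col = 0; col < 3; col++) {
            CGRect rect = CGRectMake(col * tileSize, row * tileSize, tileSize, tileSize);
            [tiles addObject:[self getSubImageFrom:source WithRect:rect]];
        }
    }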

And it was fast enough for me... until today. I used to load images using the [UIImage imageNamed:] method. That method does NOT release its cached memory until the app gets a memory warning, which was unacceptable for me.

So I started using [UIImage imageWithContentsOfFile:]. The memory allocation problems disappeared... but unfortunately my cropping method (-(UIImage *)getSubImageFrom:(UIImage *)img WithRect:(CGRect)rect) became 20 times slower! I've now spent several hours searching for a solution, without result. I hope you can help me.

Best regards!

PS. I've tried the tips from this question: CGContextDrawImage is EXTREMELY slow after large UIImage drawn into it, without any result.

1 Answer

This is what we call a "tradeoff". You complain that +[UIImage imageNamed:] makes things faster but uses more memory, and that +[UIImage imageWithContentsOfFile:] is slow but doesn't use as much memory. The reason the latter is slow is that every time you call +[UIImage imageWithContentsOfFile:] it has to read and decode the image from "disk" and that takes time (a non-trivial amount of time, as you've learned). When you use +[UIImage imageNamed:] it is read and decoded once, and then the result of the read/decode operation is kept in memory until a memory warning comes in. This means that subsequent accesses of that image (before a memory warning) will be fast.
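One way to pay the decode cost exactly once per image is to force decompression up front by drawing the freshly loaded image into a throwaway bitmap context. This is a sketch, not part of the original question; the method name is illustrative, and it assumes (as is commonly the case) that drawing the image triggers the full decode:

    // Force a one-time decode so later draws don't repeat the decompression.
    - (UIImage *)decodedImageWithContentsOfFile:(NSString *)path
    {
        UIImage *raw = [UIImage imageWithContentsOfFile:path];
        if (!raw) return nil;

        UIGraphicsBeginImageContextWithOptions(raw.size, NO, raw.scale);
        [raw drawAtPoint:CGPointZero]; // decode happens here, once
        UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        return decoded;
    }

Cropping nine tiles out of the returned image then only pays for drawing, not for nine separate decodes.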

It sounds like a good approach would be for you to cache the images in a cache whose lifetime you control. NSCache would be the obvious choice, since it also responds to system memory pressure, and you can explicitly evict entries sooner using -removeObjectForKey:.
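A minimal sketch of such a cache (the method and key names are illustrative, not from the question):

    // Cache decoded images keyed by file path; NSCache evicts automatically
    // under memory pressure, and you can evict manually as well.
    - (UIImage *)cachedImageForPath:(NSString *)path
    {
        static NSCache *cache;
        static dispatch_once_t once;
        dispatch_once(&once, ^{ cache = [[NSCache alloc] init]; });

        UIImage *img = [cache objectForKey:path];
        if (!img) {
            img = [UIImage imageWithContentsOfFile:path];
            if (img) [cache setObject:img forKey:path];
        }
        return img;
    }

    // When you know an image is no longer needed:
    // [cache removeObjectForKey:path];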

Another approach might be to create your sub-images once and write them to disk (look up NSCachesDirectory for a good place to put such things). This will eliminate the need to do this (relatively expensive) subsetting operation more than once, and the smaller images (assuming you don't need them all at once) might be faster to load from disk than the larger images.
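Persisting a tile could look like this (a sketch; the file name and the `tile` variable are assumptions):

    // Write one 200x200 tile into the Caches directory so the crop is done once.
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(
        NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *tilePath = [cachesDir stringByAppendingPathComponent:@"tile_0_0.png"];
    [UIImagePNGRepresentation(tile) writeToFile:tilePath atomically:YES];

    // On later runs, load the small tile directly -- a much cheaper decode
    // than the full 600x600 source:
    UIImage *cachedTile = [UIImage imageWithContentsOfFile:tilePath];

Note that the system may purge the Caches directory when disk space is low, so fall back to re-cropping if the file is missing.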

To play devil's advocate for a minute: if UIImage does the right thing when a memory warning comes in, why do you care that it uses memory? Is this memory use actually causing your app to get killed? (I know from experience that memory warnings are not interlocked -- i.e. you don't necessarily have enough time to reduce your consumption before your process is killed -- but are you actually having that problem?) Put differently, is there a chance that your efforts to reduce memory consumption qualify as premature optimization?