I have a generative art application which starts with a small set of points, grows them outwards, and checks the growth to make sure it doesn't intersect with anything. My first naive implementation was to do it all on the main UI thread, with the expected consequences: as the size grows there are more points to check, so it slows down and eventually blocks the UI.
I did the obvious thing and moved the calculations to another thread so the UI could stay responsive. This helped, but only a little. I accomplished this by having an NSBitmapImageRep that I wrap in an NSGraphicsContext so I can draw into it. But I needed to ensure that I wasn't drawing it to the screen on the main UI thread while also drawing into it on the background thread, so I introduced a lock. The drawing can take a long time as the data gets larger, too, so even this was problematic.
My latest revision has two NSBitmapImageReps. One holds the most recently drawn version and is drawn to the screen whenever the view needs updating. The other is drawn into on the background thread. When the drawing on the background thread is done, it's copied to the other one. I do the copy by getting the base address of each and simply calling memcpy() to move the pixels from one to the other. (I tried swapping them rather than copying, but even though the drawing ends with a call to -[NSGraphicsContext flushGraphics], I was getting partially-drawn results in the window.)
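The copy described above would look roughly like this (the rep names are placeholders, not from the question, and both reps are assumed to have identical formats so the buffers are the same size):

```objc
// Copy the freshly drawn pixels into the rep used for blitting.
// bytesPerRow * pixelsHigh is the size of the backing buffer.
NSInteger byteCount = [offscreenRep bytesPerRow] * [offscreenRep pixelsHigh];
memcpy([blitRep bitmapData], [offscreenRep bitmapData], (size_t)byteCount);
```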
The calculation thread looks like this:
BOOL done = NO;
while (!done)
{
    self->model->lockBranches();
    self->model->iterate();
    done = (!self->model->moreToDivide()) || (!self->keepIterating);
    self->model->unlockBranches();

    [self drawIntoOffscreen];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.needsDisplay = YES;
    });
}
This works well enough for keeping the UI responsive. However, every time I copy the drawn image into the blitting image, I call -[NSBitmapImageRep bitmapData]. Looking at a memory profile in Instruments, each call to that method causes a CGImage to be created. Furthermore, that CGImage isn't released until the calculations finish, which can be several minutes. This causes memory to grow quite large: I'm seeing around 3-4 GB of CGImages in my process, even though I never need more than two of them. After the calculations finish and the cache is emptied, my app's memory drops to only 350-500 MB. I hadn't thought to use an autorelease pool in the calculation loop for this, but will give it a try.
It appears that the OS is caching the images it creates. However, it doesn't clear out the cache until the calculations are finished, so it grows without bound until then. Is there any way to keep this from happening?
Don't use -bitmapData and memcpy() to copy the image. Draw the one image into the other.

I often recommend that developers read the section "NSBitmapImageRep: CoreGraphics impedance matching and performance notes" from the 10.6 AppKit release notes:
In fact, it's a good idea to start at the earlier section with a similar title — "NSImage, CGImage, and CoreGraphics impedance matching" — and read through to the later section.
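A drawing-based copy might look roughly like this (the rep names are placeholders, not from the question):

```objc
// Draw offscreenRep's pixels into blitRep instead of memcpy'ing
// raw buffers. This lets AppKit/CoreGraphics handle any format
// conversion and manage its caches sensibly.
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:blitRep]];
[offscreenRep drawInRect:NSMakeRect(0, 0,
                                    [blitRep pixelsWide],
                                    [blitRep pixelsHigh])];
[NSGraphicsContext restoreGraphicsState];
```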
By the way, there's a good chance that swapping the image reps would work, but you just weren't synchronizing them properly. You would have to show the code where both reps were used for us to know for sure.
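One common way to make the swap safe is to perform the pointer exchange on the main queue, so it can never race with the view drawing the display rep. A sketch, assuming the two reps live in hypothetical ivars `_offscreenRep` and `_displayRep`:

```objc
// Run this after the background thread finishes drawing a frame.
// The main thread is the only place _displayRep is read, so
// exchanging the pointers here cannot race with -drawRect:.
dispatch_async(dispatch_get_main_queue(), ^{
    NSBitmapImageRep *finished = self->_offscreenRep;
    self->_offscreenRep = self->_displayRep;
    self->_displayRep = finished;
    self.needsDisplay = YES;
});
```

You would also need to keep the background thread from starting its next frame until the swap has happened, e.g. by signaling a semaphore from the block.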