I have a raw bitmap image of RGBA malloc'd data; rows are obviously a multiple of 4 bytes. This data actually originates from an AVI (24-bit BGR format), but I convert it to 32-bit ARGB. There's about 8 MB of 32-bit data (1920x1080) per frame.
For each frame:
- I convert that frame's data into an NSData object via initWithBytes:length:.
- I then convert that into a CIImage object via imageWithBitmapData:bytesPerRow:size:format:colorSpace:.
- From that CIImage, I draw it into my final NSOpenGLView context using drawImage:inRect:fromRect:. Due to the "mosaic" nature of the target images, there are approximately 15-20 of these calls per frame, with various source/destination rects, roughly as sketched below.
Using a 30 Hz NSTimer that calls [self setNeedsDisplay:YES] on the NSOpenGLView, I can attain about 20-25 fps on a 2012 Mac Mini (2.6 GHz i7) -- it's not rock solid at 30 Hz. This is to be expected with an NSTimer instead of a CVDisplayLink.
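The driving code is essentially (simplified):

```objc
// 30 Hz repeating timer that just marks the view dirty each tick;
// the actual drawing happens in drawRect: above.
frameTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                              target:self
                                            selector:@selector(advanceFrame:)
                                            userInfo:nil
                                             repeats:YES];

- (void)advanceFrame:(NSTimer *)timer
{
    [openGLView setNeedsDisplay:YES];
}
```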
But... ignoring the NSTimer issue for now, are there any suggestions/pointers on making this frame-by-frame rendering a little more efficient?
Thanks!
NB: I would like to stick with CIImage objects, as I'll want to access transition effects at some point.
Every frame, the call to NSData's initWithBytes:length: causes an 8 MB memory allocation and an 8 MB copy.

You can get rid of this per-frame allocation/copy by replacing the NSData object with a persistent NSMutableData object (set up once at the beginning), and using its mutableBytes as the destination buffer for the frame's 24-bit to 32-bit conversion.

(Alternatively, if you prefer to manage the destination-buffer memory yourself, keep the object an NSData, but initialize it with initWithBytesNoCopy:length:freeWhenDone: and pass NO as the last parameter.)
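A minimal sketch of both variants, assuming fixed frame dimensions; ConvertBGRToARGB is a hypothetical stand-in for whatever 24-bit to 32-bit conversion routine you already have:

```objc
// One-time setup: allocate the destination buffer once and keep it around.
NSMutableData *frameData = [NSMutableData dataWithLength:rowBytes * frameHeight];

// Per frame: convert BGR -> ARGB straight into the existing buffer,
// so there is no new allocation and no extra 8 MB copy.
ConvertBGRToARGB(aviFramePixels, frameData.mutableBytes,
                 frameWidth, frameHeight, rowBytes);   // your converter

CIImage *image = [CIImage imageWithBitmapData:frameData
                                  bytesPerRow:rowBytes
                                         size:CGSizeMake(frameWidth, frameHeight)
                                       format:kCIFormatARGB8
                                   colorSpace:colorSpace];

// Or, if you'd rather own the buffer yourself, wrap it without copying:
NSData *wrapped = [[NSData alloc] initWithBytesNoCopy:myBuffer
                                               length:rowBytes * frameHeight
                                         freeWhenDone:NO];
```

Either way, the point is that the 8 MB buffer is created once; only its contents change each frame.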