I have PNGs in Apple's iOS-optimized BGRA PNG format (what I get using OptimizedPNG) and want to draw them in a way that tells CoreGraphics NOT to ignore the alpha component of the image. I'm drawing to a CGContextRef in drawRect:
Edit: the rendered image shows black where it should be fully transparent (and sometimes other random artifacts); the opaque areas are rendered normally.
The CGImageAlphaInfo I get from the image is kCGImageAlphaNoneSkipLast, which seems to indicate there is a problem in the way the image is saved by OptimizedPNG; I think it should be kCGImageAlphaPremultipliedLast.
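For reference, that value can be read straight from the decoded image (a minimal check, assuming the PNG has been loaded into a UIImage named image):

CGImageAlphaInfo info = CGImageGetAlphaInfo(image.CGImage);
if (info == kCGImageAlphaNoneSkipLast) {
    NSLog(@"alpha bytes are present in memory but ignored when compositing");
} else if (info == kCGImageAlphaPremultipliedLast) {
    NSLog(@"premultiplied alpha, honored by CGContextDrawImage");
}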
Perhaps the PNG chunks are wrong, but I don't see anything wrong with IHDR, and there is very little I can find about the CgBI chunk.
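To look at the chunks themselves, a short walk over the chunk list is enough (a sketch, assuming the raw file bytes are in an NSData named pngData); in Apple-optimized files the CgBI chunk should show up before IHDR:

// Walk the PNG chunk list and print each chunk's type and length.
const unsigned char *bytes = pngData.bytes;
NSUInteger offset = 8; // skip the 8-byte PNG signature
while (offset + 12 <= pngData.length) {
    uint32_t length = ((uint32_t)bytes[offset] << 24) | ((uint32_t)bytes[offset + 1] << 16) |
                      ((uint32_t)bytes[offset + 2] << 8) | (uint32_t)bytes[offset + 3];
    NSLog(@"chunk %c%c%c%c, %u bytes", bytes[offset + 4], bytes[offset + 5],
          bytes[offset + 6], bytes[offset + 7], (unsigned int)length);
    offset += 4 + 4 + length + 4; // length, type, data, CRC
}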
This is how OptimizedPNG saves the color data:
// IDAT
int size = width*height*4;
unsigned char *buffer = malloc(size);

// Redraw the original image into a premultiplied RGBA buffer.
CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, width*4,
    CGImageGetColorSpace(originalImage.CGImage), kCGImageAlphaPremultipliedLast);
CGRect rect = CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height);
CGContextDrawImage(context, rect, originalImage.CGImage);
CGContextRelease(context);

// One filter-type byte per scanline, followed by width*4 bytes of pixel data.
int size_line = 1 + width*4;
int size_in = height*size_line;
unsigned char *buffer_in = malloc(size_in);

for(int y = 0; y < height; ++y){
    unsigned char *src = &buffer[y*width*4];
    unsigned char *dst = &buffer_in[y*size_line];
    *dst++ = 1; // PNG filter type 1 (Sub)
    unsigned char r = 0, g = 0, b = 0, a = 0;
    for(int x = 0; x < width; ++x){
        // Swap RGBA -> BGRA while storing each byte as the difference to the pixel on the left.
        dst[0] = src[2] - b;
        dst[1] = src[1] - g;
        dst[2] = src[0] - r;
        dst[3] = src[3] - a;
        r = src[0], g = src[1], b = src[2], a = src[3];
        src += 4;
        dst += 4;
    }
}
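For what it's worth, undoing that Sub filter is just the reverse addition; a sketch like the one below (reusing buffer_in, size_line, width and height from the snippet above) recovers the premultiplied BGRA pixels and can be used to check that the filtering step itself isn't what breaks the alpha channel:

unsigned char *buffer_out = malloc(width*height*4);
for(int y = 0; y < height; ++y){
    unsigned char *src = &buffer_in[y*size_line];
    unsigned char *dst = &buffer_out[y*width*4];
    ++src; // skip the filter-type byte (written as 1 = Sub above)
    unsigned char b = 0, g = 0, r = 0, a = 0;
    for(int x = 0; x < width; ++x){
        // Adding the stored difference to the previous pixel's byte restores the raw value.
        b += src[0];
        g += src[1];
        r += src[2];
        a += src[3];
        dst[0] = b, dst[1] = g, dst[2] = r, dst[3] = a;
        src += 4;
        dst += 4;
    }
}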
From what you've told us, there is no reason to use a transparency layer. Transparency layers are used to combine two or more objects to yield a composite graphic that is treated as a single object. This is useful if you want to apply an effect to the composite object rather than to each individual object. A very common case is to apply a shadow to the composite of several objects.
Just using CGContextDrawImage() will composite the image onto the graphics context, taking the alpha channel into account. Exactly how the new image is composited over any content that is already in the graphics context depends on the blend mode set for the context. You set the blend mode with CGContextSetBlendMode().
A detailed description can be found in the Quartz 2D Programming Guide (Bitmap Images and Image Masks). As you can see from the reference, there are many options for how to composite the image, but I might guess that you had in mind kCGBlendModeMultiply or kCGBlendModeNormal. Note that the default is kCGBlendModeNormal, which simply paints the source image samples over whatever is currently in the context, respecting alpha values.
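Putting that together, a drawRect: along these lines is all that's needed (a sketch; it assumes the view keeps the decoded image in a hypothetical image property, and it flips the context because CGContextDrawImage works in Quartz's bottom-left coordinate system):

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Flip the coordinate system: Quartz's origin is the bottom-left corner, UIKit's is the top-left.
    CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    // kCGBlendModeNormal is the default; it paints source samples over existing content, respecting alpha.
    CGContextSetBlendMode(ctx, kCGBlendModeNormal);
    CGContextDrawImage(ctx, self.bounds, self.image.CGImage);
}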