ARGB and kCGImageAlphaPremultipliedFirst format. Why are the pixel colors stored as (255 - data)?


I create an image using

UIGraphicsBeginImageContextWithOptions(image.size, NO, 0);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
// more code - not relevant - removed for debugging
image = UIGraphicsGetImageFromCurrentImageContext(); // the image is now ARGB
UIGraphicsEndImageContext();

Then I try to find the color of a pixel (using the code by Minas Petterson from here: Get Pixel color of UIImage). But since the image is now in ARGB format, I had to modify the code like this:

    alpha = data[pixelInfo];
    red = data[(pixelInfo + 1)];
    green = data[pixelInfo + 2];
    blue = data[pixelInfo + 3];

However, this did not work.

The problem is that (for example) a red pixel, which in RGBA would be represented as 1 0 0 1 (really 255 0 0 255, but for simplicity I use 0-to-1 values), shows up in the image data as 0 0 1 1 and not, as I expected, 1 1 0 0. Any ideas why? Am I doing something wrong?

PS: The code that actually seems to work looks like this:

alpha = 255-data[pixelInfo];
red = 255-data[(pixelInfo + 1)];
green = 255-data[pixelInfo + 2];
blue = 255-data[pixelInfo + 3];

1 Answer


There are a couple of things going on here:

"In some contexts, primarily OpenGL, the term "RGBA" actually means the colors are stored in memory such that R is at the lowest address, G after it, B after that, and A last. OpenGL describes the above format as "BGRA" on a little-endian machine and "ARGB" on a big-endian machine." (wiki)

Graphics on OS X/iOS is backed by OpenGL, and the Intel/ARM processors involved are little-endian, so I assume we are dealing with little-endian data. That means when the format is kCGImageAlphaPremultipliedFirst (ARGB) with host byte order on a little-endian machine, the bytes actually sit in memory as B, G, R, A. But don't worry, there is an easy way to handle that.

Assuming ARGB, kCGImageAlphaPremultipliedFirst, 8 bits per component, 4 components per pixel (which is what UIGraphicsGetImageFromCurrentImageContext() returns), without having to care about endianness:

#include <libkern/OSByteOrder.h>  // OSSwapHostToBigInt32
#include <string.h>               // memcpy

- (void)parsePixelValuesFromPixel:(const uint8_t *)pixel
                       intoBuffer:(out uint8_t[4])buffer {
    static NSInteger const kRedIndex = 0;
    static NSInteger const kGreenIndex = 1;
    static NSInteger const kBlueIndex = 2;
    static NSInteger const kAlphaIndex = 3;

    // Copy rather than casting to int32_t * to avoid an unaligned read.
    uint32_t wholePixel;
    memcpy(&wholePixel, pixel, sizeof(wholePixel));

    // On a little-endian host the swap reverses the byte order, so the
    // bytes as they sit in memory (B, G, R, A) can be read off from the
    // most significant byte of `value` down to the least significant.
    uint32_t value = OSSwapHostToBigInt32(wholePixel);

    buffer[kAlphaIndex] = value & 0xFF;
    buffer[kRedIndex] = (value >> 8) & 0xFF;
    buffer[kGreenIndex] = (value >> 16) & 0xFF;
    buffer[kBlueIndex] = (value >> 24) & 0xFF;
}