Converting RGBA to ARGB (glReadPixels -> AVAssetWriter)

I want to record images rendered with OpenGL into a movie file with the help of AVAssetWriter. The problem is that the only way to access the pixels of an OpenGL framebuffer is glReadPixels, which on iOS only supports the RGBA pixel format. AVAssetWriter, however, doesn't support this format; it accepts either ARGB or BGRA. Since the alpha values can be ignored anyway, I concluded that the fastest way to convert RGBA to ARGB would be to hand glReadPixels the buffer shifted by one byte:

UInt8 *buffer = malloc(width*height*4+1);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer+1);

The problem is that the glReadPixels call leads to an EXC_BAD_ACCESS crash. If I don't shift the buffer by one byte, it works perfectly (but obviously with the wrong colors in the video file). What's the problem here?

5 Answers

BEST ANSWER

"I concluded that the fastest way to convert RGBA to ARGB would be to hand glReadPixels the buffer shifted by one byte"

This will however shift your alpha values by 1 pixel as well. Here's another suggestion:

Render the picture to a texture (using an FBO with that texture as its color attachment). Then render that texture to another framebuffer with a swizzling fragment shader:

#version ...
precision mediump float; // an ES fragment shader needs a default float precision

uniform sampler2D image;
uniform vec2 image_dim;

void main()
{
    // We want to address texel centers by absolute fragment coordinates; this
    // takes a little arithmetic because OpenGL ES SL has no texelFetch function.
    gl_FragColor.rgba =
        texture2D(image, vec2( (2.0*gl_FragCoord.x + 1.0)/(2.0*image_dim.x),
                               (2.0*gl_FragCoord.y + 1.0)/(2.0*image_dim.y) )
        ).argb; // swizzles RGBA into ARGB byte order when read back into an RGBA buffer
}
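
If it helps, the readback side might then look roughly like this once the swizzled image sits in the second FBO. This is only a sketch: swizzleFBO, swizzleProgram, sceneTexture and drawFullscreenQuad() are placeholders for whatever your renderer already has, not names from the question.

// Sketch only: draw the scene texture through the swizzle shader, then read
// the result back; all identifiers here are placeholders.
glBindFramebuffer(GL_FRAMEBUFFER, swizzleFBO);   // FBO whose color attachment gets read back
glViewport(0, 0, width, height);

glUseProgram(swizzleProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sceneTexture);      // texture the scene was rendered into
glUniform1i(glGetUniformLocation(swizzleProgram, "image"), 0);
glUniform2f(glGetUniformLocation(swizzleProgram, "image_dim"),
            (GLfloat)width, (GLfloat)height);

drawFullscreenQuad();                            // however you draw a fullscreen quad

// Because the shader wrote source.argb into the RGBA color buffer, the bytes
// now come out of glReadPixels already in A,R,G,B order.
GLubyte *argb = malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, argb);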

ANSWER

You will need to shift the bytes by doing a memcpy or other copy operation. Modifying the pointers will leave them unaligned, which may or may not be within the capabilities of the underlying hardware (DMA bus widths, tile granularity, etc.).
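
A minimal sketch of that copy, assuming the usual tightly packed width*height*4 RGBA buffer from glReadPixels (the function name and signature are illustrative, not from the question):

#include <stdint.h>
#include <stdlib.h>

// Copy an RGBA buffer into a separate, properly aligned ARGB buffer by moving
// the alpha byte of each pixel to the front. Caller frees the result.
static uint8_t *rgba_to_argb(const uint8_t *rgba, size_t width, size_t height)
{
    size_t count = width * height;
    uint8_t *argb = malloc(count * 4);
    if (!argb) return NULL;

    for (size_t i = 0; i < count; i++) {
        const uint8_t *src = rgba + i * 4;   // R G B A
        uint8_t       *dst = argb + i * 4;   // A R G B
        dst[0] = src[3];
        dst[1] = src[0];
        dst[2] = src[1];
        dst[3] = src[2];
    }
    return argb;
}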

ANSWER

What happens if you put an extra 128 bytes of slack on the end of your buffer? It might be that OpenGL is trying to fill 4/8/16/etc bytes at a time for performance, and has a bug when the buffer is non-aligned or something. It wouldn't be the first time a performance optimization in OpenGL had issues on an edge case :)
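
In code, the experiment would just be something like this (the same shifted-by-one trick as in the question, only with generous slack at the end):

// Experiment only: keep the one-byte shift, but over-allocate so that any
// wide or aligned writes by the driver still land inside the allocation.
UInt8 *buffer = malloc(width*height*4 + 1 + 128);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer+1);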

ANSWER

Using buffer+1 means the data is not written at the start of your malloc'd memory but one byte in, so the write runs over the end of your malloc'd memory, causing the crash.

If iOS's glReadPixels will only accept GL_RGBA, then you'll have to go through and rearrange the bytes yourself, I think.

UPDATE: sorry, I missed the +1 in your malloc; StilesCrisis is probably right about the cause of the crash.

ANSWER

Try calling

glPixelStorei(GL_PACK_ALIGNMENT, 1);

before glReadPixels.

From the docs:

GL_PACK_ALIGNMENT

Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).

The default value is 4 (see glGet).

This setting often gets mentioned as a troublemaker in various "OpenGL pitfalls" lists, although that is generally more to do with its row-padding effects than with buffer alignment.

As an alternative approach, what happens if you malloc 4 extra bytes, do the glReadPixels 4-byte aligned starting at buffer+4, and then pass buffer+3 to your AVAssetWriter (although I've no idea whether AVAssetWriter is any more tolerant of alignment issues)?
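
Roughly like this, as a sketch of the idea (whether AVAssetWriter accepts the off-by-one base pointer is the open question):

// Keep glReadPixels itself 4-byte aligned, then hand AVAssetWriter a pointer
// one byte earlier, so every 4-byte group reads as [previous A][R][G][B]
// (the very first "A" byte is junk, which is fine if alpha is ignored).
UInt8 *buffer = malloc(width*height*4 + 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer+4);
UInt8 *argbStart = buffer + 3;   // pass this pointer on to the AVAssetWriter path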