I need to read the pixel data from the framebuffer in OpenGL ES 2.0. I know this can be done easily with glReadPixels, but since iOS 5 we can use texture caches (CVOpenGLESTextureCache) for faster reading.
I have implemented the solution proposed by Brad Larson (I will always be thankful to him; I think he is doing a great job for the community by sharing so much knowledge) in Faster alternative to glReadPixels in iPhone OpenGL ES 2.0.
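Roughly, the idea is the following (this is a simplified sketch rather than my exact code; error checks are omitted and names like SetupTextureCacheTarget, _textureCache and _renderTarget are just placeholders): render into an IOSurface-backed CVPixelBuffer through a texture created with the texture cache, then read the bytes straight from the pixel buffer instead of calling glReadPixels.

    #import <OpenGLES/EAGL.h>
    #import <OpenGLES/ES2/gl.h>
    #import <CoreVideo/CoreVideo.h>

    static CVOpenGLESTextureCacheRef _textureCache  = NULL;
    static CVPixelBufferRef          _renderTarget  = NULL;
    static CVOpenGLESTextureRef      _renderTexture = NULL;

    void SetupTextureCacheTarget(EAGLContext *context, size_t width, size_t height,
                                 GLuint framebuffer)
    {
        // One texture cache per EAGL context.
        CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL,
                                     &_textureCache);

        // The pixel buffer must be IOSurface-backed, otherwise the CPU cannot
        // see what the GPU renders into it.
        CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                                                   &kCFTypeDictionaryKeyCallBacks,
                                                   &kCFTypeDictionaryValueCallBacks);
        CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                                   &kCFTypeDictionaryKeyCallBacks,
                                                   &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

        CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_32BGRA, attrs, &_renderTarget);

        // Wrap the pixel buffer in an OpenGL ES texture...
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache,
                                                     _renderTarget, NULL,
                                                     GL_TEXTURE_2D, GL_RGBA,
                                                     (GLsizei)width, (GLsizei)height,
                                                     GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                                     &_renderTexture);

        // ...and attach that texture as the color attachment of the framebuffer,
        // so everything rendered lands directly in the pixel buffer's memory.
        glBindTexture(CVOpenGLESTextureGetTarget(_renderTexture),
                      CVOpenGLESTextureGetName(_renderTexture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                               CVOpenGLESTextureGetName(_renderTexture), 0);

        CFRelease(attrs);
        CFRelease(empty);
    }

    // After rendering, the bytes are available without glReadPixels:
    void ReadBackPixels(void)
    {
        glFinish();  // make sure the GPU has finished writing

        CVPixelBufferLockBaseAddress(_renderTarget, kCVPixelBufferLock_ReadOnly);
        uint8_t *pixels      = (uint8_t *)CVPixelBufferGetBaseAddress(_renderTarget);
        size_t   bytesPerRow = CVPixelBufferGetBytesPerRow(_renderTarget);
        // ... use pixels / bytesPerRow here ...
        CVPixelBufferUnlockBaseAddress(_renderTarget, kCVPixelBufferLock_ReadOnly);
    }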
Everything seems to work: I get the proper data, and if I compare it with what glReadPixels returns, the data is identical. My problem appeared when I measured the performance of these two approaches (the time spent retrieving the data).
Here are my results:
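I time each path roughly like this (again a simplified sketch, not my exact code; CompareReadbackTimes and its parameters are placeholders):

    #import <Foundation/Foundation.h>
    #import <OpenGLES/ES2/gl.h>
    #import <CoreVideo/CoreVideo.h>
    #include <mach/mach_time.h>

    static uint64_t MicrosecondsSince(uint64_t start)
    {
        mach_timebase_info_data_t info;
        mach_timebase_info(&info);
        uint64_t elapsed = mach_absolute_time() - start;
        return elapsed * info.numer / info.denom / 1000;   // Mach ticks -> ns -> us
    }

    void CompareReadbackTimes(GLsizei width, GLsizei height,
                              GLubyte *buffer, CVPixelBufferRef renderTarget)
    {
        // Path 1: classic glReadPixels from the currently bound framebuffer.
        uint64_t t0 = mach_absolute_time();
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        NSLog(@"glReadPixels %llu us", MicrosecondsSince(t0));

        // Path 2: the framebuffer already renders into an IOSurface-backed
        // CVPixelBuffer, so reading means waiting for the GPU and locking it.
        uint64_t t1 = mach_absolute_time();
        glFinish();
        CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
        uint8_t *pixels = (uint8_t *)CVPixelBufferGetBaseAddress(renderTarget);
        (void)pixels;   // ... inspect or copy the bytes here ...
        CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
        NSLog(@"Texture reading %llu us", MicrosecondsSince(t1));
    }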
(framebuffer and texture size 320x480 pixels)
GPUImageProcessingDemo[1252:707] glReadPixels 2750 us
GPUImageProcessingDemo[1252:707] Texture reading 1276 us
GPUImageProcessingDemo[1252:707] glReadPixels 2443 us
GPUImageProcessingDemo[1252:707] Texture reading 1263 us
GPUImageProcessingDemo[1252:707] glReadPixels 2494 us
GPUImageProcessingDemo[1252:707] Texture reading 1375 us
This looks very promising, since it takes almost half the time glReadPixels needs. The problem is that when I change the texture size to something a little bigger, I get these results:
(framebuffer and texture size 480x620 pixels)
GPUImageProcessingDemo[1077:707] glReadPixels 2407 us
GPUImageProcessingDemo[1077:707] Texture reading 2842 us
GPUImageProcessingDemo[1077:707] glReadPixels 2392 us
GPUImageProcessingDemo[1077:707] Texture reading 3040 us
GPUImageProcessingDemo[1077:707] glReadPixels 2224 us
Does this make sense? Or should I always expect the texture-cache approach to give better results?