Problem
My main goal is to get model coordinates for touches on the device, i.e. to check what you've touched. I'm working with a large model and have to draw many things that also have to be touchable.
To achieve this I know two possible ways: on one hand we could do ray casting and intersect the camera's pointing vector with the model, which we would have to keep in memory somewhere. On the other hand, and that's what I'm trying to do, we could do it the old-fashioned way:
function gluUnProject(winx, winy, winz: TGLdouble;
                      const modelMatrix: TGLMatrixd4;
                      const projMatrix: TGLMatrixd4;
                      const viewport: TGLVectori4;
                      objx, objy, objz: PGLdouble): TGLint;
and transform the screen coordinates back to model coordinates. Am I correct so far? Do you know other methods for touch handling in OpenGL apps? As you can see, the function takes winz as a parameter; this is the depth of the fragment at the given screen coordinate, and this information usually comes from the depth buffer. I'm already aware that OpenGL ES 2.0 doesn't provide access to its internally used depth buffer the way "normal" OpenGL does. So how may I get this information?
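(As a fallback, the ray-casting route mentioned at the beginning could also be built on top of the unproject code: unproject the touch twice, at winz = 0 and winz = 1, and intersect the resulting ray with the model. A rough sketch, using a hypothetical unprojectScreenPoint:atDepth: variant of the method shown further below that takes winz as a parameter instead of hard-coding it:)
//sketch: build a pick ray by unprojecting the touch at the near and far planes
//(unprojectScreenPoint:atDepth: is a hypothetical variant of the method below)
GLKVector4 nearPt = [self unprojectScreenPoint:screen atDepth:0.0];
GLKVector4 farPt  = [self unprojectScreenPoint:screen atDepth:1.0];
GLKVector3 rayOrigin = GLKVector3Make(nearPt.x, nearPt.y, nearPt.z);
GLKVector3 rayDir = GLKVector3Normalize(GLKVector3Subtract(
    GLKVector3Make(farPt.x, farPt.y, farPt.z), rayOrigin));
//intersect (rayOrigin, rayDir) with the model's triangles or bounding boxes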
Apple offers two possibilities: either create an offscreen framebuffer with a depth attachment, or render the depth information into a texture. Sadly the manual doesn't show a way to read that information back on iOS. I think I have to use glReadPixels for that. I implemented everything I could find, but no matter how I set it up, I don't get the right depth value back from the offscreen framebuffer or the texture. I'm expecting to get a GL_FLOAT with the z value, but instead I get values like these:
z:28550323
r:72 g:235 b:191 [3]:1 <-- always this
Code
gluUnProject
As we all know, the glu library isn't available on iOS, so I looked up the code and implemented the following method, based on this source: link. The GLKVector2 screen input variable contains the X,Y coordinates on the screen, read by the UITapGestureRecognizer.
-(GLKVector4)unprojectScreenPoint:(GLKVector2)screen {
    //get active viewport
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);
    NSLog(@"viewport [0]:%d [1]:%d [2]:%d [3]:%d", viewport[0], viewport[1], viewport[2], viewport[3]);

    //get matrices; GLKMatrix4Multiply(a, b) computes a*b, so projection comes first
    GLKMatrix4 projectionModelViewMatrix = GLKMatrix4Multiply(_projectionMatrix, _modelViewMatrix);
    projectionModelViewMatrix = GLKMatrix4Invert(projectionModelViewMatrix, NULL);

    //in iOS, Y is inverted
    screen.v[1] = viewport[3] - screen.v[1];
    NSLog(@"screen: [0]:%.2f [1]:%.2f", screen.v[0], screen.v[1]);

    //read from the depth component of the last rendered offscreen framebuffer
    /*
    GLubyte z;
    glBindFramebuffer(GL_FRAMEBUFFER, _depthFramebuffer);
    glReadPixels(screen.v[0], screen.v[1], 1, 1, GL_DEPTH_COMPONENT16, GL_UNSIGNED_BYTE, &z);
    NSLog(@"z:%c", z);
    */

    //read from the last rendered depth texture
    Byte rgb[4];
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, _depthTexture);
    glReadPixels(screen.v[0], screen.v[1], 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgb);
    glBindTexture(GL_TEXTURE_2D, 0);
    NSLog(@"r:%d g:%d b:%d [3]:%d", rgb[0], rgb[1], rgb[2], rgb[3]);

    GLKVector4 in = GLKVector4Make(screen.v[0], screen.v[1], 1, 1.0);

    /* Map x and y from window coordinates */
    in.v[0] = (in.v[0] - viewport[0]) / viewport[2];
    in.v[1] = (in.v[1] - viewport[1]) / viewport[3];

    /* Map to range -1 to 1 */
    in.v[0] = in.v[0] * 2.0 - 1.0;
    in.v[1] = in.v[1] * 2.0 - 1.0;
    in.v[2] = in.v[2] * 2.0 - 1.0;

    GLKVector4 out = GLKMatrix4MultiplyVector4(projectionModelViewMatrix, in);
    if (out.v[3] == 0.0) {
        NSLog(@"out.v[3]==0.0");
        return GLKVector4Make(0.0, 0.0, 0.0, 0.0);
    }
    out.v[0] /= out.v[3];
    out.v[1] /= out.v[3];
    out.v[2] /= out.v[3];
    return out;
}
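For completeness, this is roughly how the method gets its input from the gesture recognizer (a sketch; handleTap: is the assumed action of the UITapGestureRecognizer, and on Retina displays the touch location in points has to be scaled by contentScaleFactor to match the framebuffer's pixels):
-(void)handleTap:(UITapGestureRecognizer *)recognizer {
    CGPoint location = [recognizer locationInView:self.view];
    //touches are in points; the viewport is in pixels, so scale on Retina displays
    CGFloat scale = self.view.contentScaleFactor;
    GLKVector2 screen = GLKVector2Make(location.x * scale, location.y * scale);
    GLKVector4 model = [self unprojectScreenPoint:screen];
    NSLog(@"model: x:%.2f y:%.2f z:%.2f", model.x, model.y, model.z);
}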
The unproject method tries to read the data from the depth buffer or the depth texture, both of which are generated while drawing. I know that this code is very inefficient, but it has to work first; I'll clean it up afterwards.
I tried drawing only to the additional framebuffer (commented out here), only to the texture, and to both together, with no success.
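One thing I'm unsure about: glReadPixels always reads from the currently bound framebuffer, never from a bound texture, so binding _depthTexture before the read may have no effect at all. A sketch of how I understand the read would have to look instead, assuming the offscreen framebuffer (here a hypothetical _framebuffer ivar) is kept alive across frames with the texture attached as GL_COLOR_ATTACHMENT0:
//sketch: read from the offscreen framebuffer itself, not from a bound texture
//(assumption: _framebuffer and _defaultFBO are hypothetical ivars saved during draw)
GLubyte rgba[4];
glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
glReadPixels((GLint)screen.v[0], (GLint)screen.v[1], 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
glBindFramebuffer(GL_FRAMEBUFFER, _defaultFBO); //restore the on-screen framebuffer
NSLog(@"r:%d g:%d b:%d a:%d", rgba[0], rgba[1], rgba[2], rgba[3]);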
draw
-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glUseProgram(_program);

    //save the default framebuffer and renderbuffer so we can restore them later
    //http://stackoverflow.com/questions/10761902/ios-glkit-and-back-to-default-framebuffer
    GLint defaultFBO;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
    GLint defaultRBO;
    glGetIntegerv(GL_RENDERBUFFER_BINDING_OES, &defaultRBO);

    GLuint width, height;
    //width = height = 512;
    width = self.view.frame.size.width;
    height = self.view.frame.size.height;

    GLuint framebuffer;
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    /* method offscreen framebuffer
    GLuint depthRenderbuffer;
    glGenRenderbuffers(1, &depthRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
    */

    //method render to texture
    glActiveTexture(GL_TEXTURE1);
    //https://github.com/rmaz/Shadow-Mapping
    //http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html
    GLuint depthTexture;
    glGenTextures(1, &depthTexture);
    glBindTexture(GL_TEXTURE_2D, depthTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // we do not want to wrap, this would cause incorrect shadows to be rendered
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // set up the depth compare function to check the shadow depth
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_EXT, GL_LEQUAL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_EXT, GL_COMPARE_REF_TO_TEXTURE_EXT);
    //glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8_OES, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }
    GLenum glError = glGetError();
    if (GL_NO_ERROR != glError) {
        NSLog(@"Offscreen OpenGL Error: %d", glError);
    }

    glClear(GL_DEPTH_BUFFER_BIT);
    //glCullFace(GL_FRONT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glUniform1i(_uniforms.renderMode, 1);
    //
    //Drawing calls
    //
    _depthTexture = depthTexture;
    //_depthFramebuffer = depthRenderbuffer;

    // Revert to the default framebuffer for now
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
    glBindRenderbuffer(GL_RENDERBUFFER, defaultRBO);

    // Render normally
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glClearColor(0.316f, 0.50f, 0.86f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    //glCullFace(GL_BACK);
    glUniform1i(_uniforms.renderMode, 0);
    [self update];
    //
    //Drawing calls
    //
}
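One assumption baked into this code: on ES 2.0, depth textures (GL_DEPTH_COMPONENT as a texture format) require the GL_OES_depth_texture extension, which I should probably check for first:
//check for the extension that allows GL_DEPTH_COMPONENT textures on ES 2.0
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
BOOL hasDepthTexture = (extensions != NULL
    && strstr(extensions, "GL_OES_depth_texture") != NULL);
NSLog(@"GL_OES_depth_texture supported: %d", hasDepthTexture);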
Could the value for z be of a different type? Is it just a float that I have to read back into a float variable?
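One detail from the spec that may matter here: in OpenGL ES 2.0, glReadPixels is only guaranteed to support GL_RGBA with GL_UNSIGNED_BYTE, plus exactly one implementation-defined format/type combination, which can be queried like this:
//query the one extra format/type combination glReadPixels supports
//besides the always-available GL_RGBA / GL_UNSIGNED_BYTE
GLint readFormat, readType;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);
NSLog(@"glReadPixels format: 0x%04x type: 0x%04x", readFormat, readType);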
Thanks for your support! patte
Edit 1
I now get the RGBA values from the texture I rendered to. For this to happen I activate the separate framebuffer while drawing, but without the depth renderbuffer, and connect the texture to it. I edited the code above. Now I'm getting the following values:
screen: [0]:604.00 [1]:348.00
r:102 g:102 b:102 [3]:255
screen: [0]:330.00 [1]:566.00
r:73 g:48 b:32 [3]:255
screen: [0]:330.00 [1]:156.00
r:182 g:182 b:182 [3]:255
screen: [0]:266.00 [1]:790.00
r:80 g:127 b:219 [3]:255
screen: [0]:548.00 [1]:748.00
r:80 g:127 b:219 [3]:255
As you can see, RGBA values are being read. The good news is that when I touch the sky, where there is no model anymore, the value is always the same, and while touching the model it varies. So I think the texture should be correct. But how would I now reassemble the real value out of these 4 bytes, which could then be passed to gluUnProject? I can't just cast them to a float.
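The only scheme I can think of (an assumption based on shadow-mapping samples, not on the Apple docs): instead of the scene colors, have the fragment shader pack gl_FragCoord.z into the four color bytes, e.g. with vec4 packed = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0)); minus the carry bits, and then reassemble the value on the CPU by treating the bytes as successive base-255 digits:
//sketch: reassemble the depth value from the four bytes read above,
//assuming the fragment shader packed gl_FragCoord.z as described
float winz = rgb[0] / 255.0f
           + rgb[1] / (255.0f * 255.0f)
           + rgb[2] / (255.0f * 255.0f * 255.0f)
           + rgb[3] / (255.0f * 255.0f * 255.0f * 255.0f);
//winz is back in [0,1] and could replace the hard-coded 1 in the unproject code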