Oculus Rift + Point Sprites + Point size attenuation


I am coding a small project with Oculus Rift support, and I use point sprites to render my particles. I calculate the size of the point sprites in pixels in the vertex shader, based on their distance from the "camera". When drawing on the default screen (not on the Rift) the size works perfectly, but when I switch to the Rift I notice these phenomena:

The particles on the Left Eye are small and get reduced in size very rapidly. The particles on the Right Eye are huge and do not change in size.

Screenshots:
Rift disabled: https://i.stack.imgur.com/03l3o.jpg
Rift enabled: https://i.stack.imgur.com/4tswC.jpg

Here is the vertex shader:

#version 120

attribute vec3 attr_pos;
attribute vec4 attr_col;
attribute float attr_size;

uniform mat4 st_view_matrix;
uniform mat4 st_proj_matrix;
uniform vec2 st_screen_size;

varying vec4 color;

void main()
{
    vec4 local_pos = vec4(attr_pos, 1.0);
    vec4 eye_pos = st_view_matrix * local_pos;
    vec4 proj_vector = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);
    float proj_size = st_screen_size.x * proj_vector.x / proj_vector.w;

    gl_PointSize = proj_size;
    gl_Position = st_proj_matrix * eye_pos;

    color = attr_col;
}

The st_screen_size uniform is the size of the viewport. Since I am using a single framebuffer when rendering on the Rift (one half for each eye), the value of st_screen_size should be (framebuffer_width / 2.0, framebuffer_height).

Here is my draw call:

    /*Drawing starts with a call to ovrHmd_BeginFrame.*/
    ovrHmd_BeginFrame(game::engine::ovr_data.hmd, 0);

    /*Start drawing onto our texture render target.*/
    game::engine::ovr_rtarg.bind();
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

   //Update the particles.
    game::engine::nuc_manager->update(dt, get_msec());

    /*for each eye... */
    for(unsigned int i = 0 ; i < 2 ; i++){
        ovrEyeType eye = game::engine::ovr_data.hmd->EyeRenderOrder[i];
        /* -- Viewport Transformation --
         * Setup the viewport to draw in the left half of the framebuffer when we're
         * rendering the left eye's view (0, 0, width / 2.0, height), and in the right half
         * of the framebuffer for the right eye's view (width / 2.0, 0, width / 2.0, height).
         */
        int fb_width = game::engine::ovr_rtarg.get_fb_width();
        int fb_height = game::engine::ovr_rtarg.get_fb_height();

        glViewport(eye == ovrEye_Left ? 0 : fb_width / 2, 0, fb_width / 2, fb_height);

        //Send the viewport size to the shader.
        set_unistate("st_screen_size", Vector2(fb_width / 2.0, fb_height));

        /* -- Projection Transformation --
         * We'll just have to use the projection matrix supplied by the Oculus SDK for this eye.
         * Note that libovr matrices are the transpose of what OpenGL expects, so we have to
         * send the transposed ovr projection matrix to the shader.
         */
        proj = ovrMatrix4f_Projection(game::engine::ovr_data.hmd->DefaultEyeFov[eye], 0.01, 40000.0, true);

        Matrix4x4 proj_mat;
        memcpy(proj_mat[0], proj.M, 16 * sizeof(float));

        //Send the projection matrix to the shader.
        set_projection_matrix(proj_mat);

        /* -- View/Camera Transformation --
         * We need to construct a view matrix by combining all the information provided by
         * the Oculus SDK about the position and orientation of the user's head in the world.
         */
        pose[eye] = ovrHmd_GetHmdPosePerEye(game::engine::ovr_data.hmd, eye);

        camera->reset_identity();

        camera->translate(Vector3(game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.x,
            game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.y,
            game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.z));

        /*Construct a quaternion from the data of the Oculus SDK and rotate the view matrix.*/
        Quaternion q = Quaternion(pose[eye].Orientation.w, pose[eye].Orientation.x,
                                  pose[eye].Orientation.y, pose[eye].Orientation.z);
        camera->rotate(q.inverse().normalized());

        /*Translate the view matrix with the positional tracking.*/
        camera->translate(Vector3(-pose[eye].Position.x, -pose[eye].Position.y, -pose[eye].Position.z));

        camera->rotate(Vector3(0, 1, 0), DEG_TO_RAD(theta));

        //Send the view matrix to the shader.
        set_view_matrix(*camera);

        game::engine::active_stage->render(STAGE_RENDER_SKY | STAGE_RENDER_SCENES | STAGE_RENDER_GUNS |
            STAGE_RENDER_ENEMIES | STAGE_RENDER_PROJECTILES, get_msec());
        game::engine::nuc_manager->render(RENDER_PSYS, get_msec());
        game::engine::active_stage->render(STAGE_RENDER_COCKPIT, get_msec());
    }

    /* After drawing both eyes into the texture render target, revert to drawing directly to the display
     * and call ovrHmd_EndFrame to let the Oculus SDK draw both images onto the HMD screen, compensated
     * for lens distortion and chromatic aberration.
     */
    game::engine::ovr_rtarg.unbind();

    ovrHmd_EndFrame(game::engine::ovr_data.hmd, pose, &game::engine::ovr_data.fb_ovr_tex[0].Texture);

This problem has troubled me for many days now, and I feel like I have reached a dead end. I could just use billboarded quads... but I don't want to give up that easily :) Plus, point sprites are faster. Does the math behind point size attenuation based on distance change when rendering on the Rift? Is there something I'm not taking into account? Math is not (yet, at least) my strongest point. :) Any insight will be greatly appreciated!

PS: If any additional information is required about the code I posted, I will gladly provide it.

2 Answers

Best Answer

vec4 local_pos = vec4(attr_pos, 1.0);
vec4 eye_pos = st_view_matrix * local_pos;
vec4 proj_voxel = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);
float proj_size = st_screen_size.x * proj_voxel.x / proj_voxel.w;

gl_PointSize = proj_size;

Basically, you are first transforming your point to view space to figure out its Z coordinate in view space (distance from the viewer), then you construct a vector aligned with the X axis whose length is the desired particle size, and project it to see how many pixels it covers after projection and viewport transformation (sort of).

This is perfectly reasonable, assuming your projection matrix is symmetrical. That assumption is wrong when dealing with the Rift. I've drawn a diagram to illustrate the problem better:

https://i.stack.imgur.com/aLKkx.jpg

As you can see, when the frustum is asymmetric, which is certainly the case with the Rift, using the distance of the projected point from the center of the screen will give you wildly different values for each eye, and certainly different from the "correct" projection size you're looking for.

What you must do instead is project two points, say (0, 0, z, 1) and (attr_size, 0, z, 1), using the same method, and compute their difference in screen space (after the projection, perspective divide, and viewport transform).
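For illustration, here is a minimal sketch of how that could look in the question's vertex shader, reusing the same uniforms and attributes (only the size computation changes; the 0.5 factor is the NDC-to-pixel viewport scaling, so adjust if you want to match the original shader's scale exactly):

#version 120

attribute vec3 attr_pos;
attribute vec4 attr_col;
attribute float attr_size;

uniform mat4 st_view_matrix;
uniform mat4 st_proj_matrix;
uniform vec2 st_screen_size;

varying vec4 color;

void main()
{
    vec4 eye_pos = st_view_matrix * vec4(attr_pos, 1.0);

    /* Project a reference point on the view-space Z axis and a second point
       offset by attr_size along X, as described above. */
    vec4 p0 = st_proj_matrix * vec4(0.0, 0.0, eye_pos.z, eye_pos.w);
    vec4 p1 = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);

    /* Perspective divide and viewport transform for the x coordinate of each point. */
    float x0 = (p0.x / p0.w * 0.5 + 0.5) * st_screen_size.x;
    float x1 = (p1.x / p1.w * 0.5 + 0.5) * st_screen_size.x;

    /* The asymmetric offset of the Rift's frustum shifts both points equally,
       so it cancels out in the difference. */
    gl_PointSize = abs(x1 - x0);
    gl_Position = st_proj_matrix * eye_pos;

    color = attr_col;
}

With a symmetric projection, p0 lands exactly at the screen center, which is why the original single-point version behaves correctly outside the Rift.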

Second Answer

I can recommend a couple of troubleshooting techniques.

First off, modify your code to automatically write a screenshot of the very first frame rendered (or, if that's not convenient, just have a static boolean that causes the main draw to skip everything but the begin/end frame calls after the first run through). The SDK can sometimes mess up the OpenGL state machine, and if that's happening, then what you're seeing is probably a result of the work done in ovrHmd_EndFrame() screwing up your rendering on subsequent passes through the rendering loop. Something else in your rendering code (subsequent to the particle rendering) may be inadvertently restoring the desired state, which is why the second eye rendered looks fine.
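If the static-boolean route is easier, a rough sketch of that test could look like the following, reusing the question's own begin/end frame calls (first_frame is just an illustrative name):

    /* Render the scene only once; every later pass goes straight from
       BeginFrame to EndFrame, so anything that then looks wrong on the HMD
       points at state being clobbered by the SDK's distortion pass. */
    static bool first_frame = true;

    ovrHmd_BeginFrame(game::engine::ovr_data.hmd, 0);
    if(first_frame) {
        /* ...bind the render target and run the per-eye loop from the question here... */
        first_frame = false;
    }
    ovrHmd_EndFrame(game::engine::ovr_data.hmd, pose, &game::engine::ovr_data.fb_ovr_tex[0].Texture);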

Second, I would try breaking the rendered eyes up into two framebuffers. Perhaps there's something in your code that is unexpectedly doing something to the framebuffer as a whole (like clearing the depth buffer) that is causing the difference. You could be running through your second eye with a different framebuffer state than you expect based on your top-level code. Breaking it up into two framebuffers will tell you if that's the case.
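As a rough sketch of what a per-eye render target could look like with plain OpenGL calls (eye_w, eye_h, and the variable names are placeholders; how the resulting textures are handed to ovrHmd_EndFrame depends on your render-target wrapper):

    /* One color texture, depth renderbuffer, and FBO per eye. */
    GLuint eye_fbo[2], eye_tex[2], eye_depth[2];
    glGenFramebuffers(2, eye_fbo);
    glGenTextures(2, eye_tex);
    glGenRenderbuffers(2, eye_depth);

    for(int i = 0; i < 2; i++) {
        glBindTexture(GL_TEXTURE_2D, eye_tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, eye_w, eye_h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glBindRenderbuffer(GL_RENDERBUFFER, eye_depth[i]);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, eye_w, eye_h);

        glBindFramebuffer(GL_FRAMEBUFFER, eye_fbo[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, eye_tex[i], 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, eye_depth[i]);
    }
    /* In the eye loop: bind eye_fbo[eye], clear, and use glViewport(0, 0, eye_w, eye_h). */

If the artifact disappears with separate framebuffers, whatever was leaking between the two eye passes in the shared target is your culprit.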

Another test you might run, similar to the second one, is to refactor your rendering code so you can run through this loop using the default framebuffer and without the Oculus SDK calls. This will help you determine whether the issue is in the SDK or in your own rendering code. Just render the two eye views to the two halves of the screen rather than the two halves of an offscreen framebuffer.