How to convert coordinates from one camera space to another camera space


I'm developing a render engine in WebGL and JavaScript, with no Three.js or other libraries.

What I am trying to achieve:

  1. In a fragment shader, I have the fragment's XYZ coordinates in camera #1 space.
  2. I need to convert these coordinates into camera #2 space (e.g. a camera acting as a spotlight).
  3. By providing a matrix that converts from camera #1 space to camera #2 space, I expect to get the fragment's XYZ coordinates in camera #2 space.

The problem is that the matrix I calculate does not produce the expected results: the XYZ coordinates I get do not match what they should be. I suspect my matrix calculation is incorrect, but just in case, here is all the relevant code:

GLSL code

A method to read the depth value for given screen coordinates:

float get_depth( in vec2 uv ) {
    // Returns depth read from pre-rendered depth image (deferred render pipeline)
    // for provided screen coordinates in range from 0.0 to 1.0
    // code for reading depth information from texture goes here...
    return depth;
}

The method I use to obtain the fragment's coordinates in camera #1 space:

vec4 get_c1pos( in vec2 uv ) {
    float d = get_depth(uv);
    vec4 pos_clip = vec4( 0.0, 0.0, 0.0, 1.0 );
    pos_clip.xy = uv.xy * 2.0 - 1.0;   // screen [0, 1] -> NDC [-1, 1]
    pos_clip.z  = d * 2.0 - 1.0;       // depth [0, 1] -> NDC [-1, 1]
    vec4 pos_ws = uInvProjViewC1 * pos_clip;
    pos_ws.xyz /= pos_ws.w;            // perspective divide
    return pos_ws;
}
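To sanity-check this reconstruction, the same arithmetic can be mirrored on the CPU. The sketch below is plain JavaScript with no libraries; the function names are mine, and for simplicity it works in eye space with a symmetric-frustum perspective projection only (no view matrix), using the analytic inverse of that projection. It projects an eye-space point to screen UV plus depth, then reconstructs it the same way get_c1pos does:

```javascript
// Symmetric perspective projection (OpenGL-style clip space, z in [-1, 1]).
function project(p, fovY, aspect, zn, zf) {
  const f = 1 / Math.tan(fovY / 2);
  const A = (zf + zn) / (zn - zf);
  const B = (2 * zf * zn) / (zn - zf);
  const clip = [ (f / aspect) * p[0], f * p[1], A * p[2] + B, -p[2] ];
  const ndc = clip.map(c => c / clip[3]);                // perspective divide
  return {
    uv:    [ ndc[0] * 0.5 + 0.5, ndc[1] * 0.5 + 0.5 ],  // screen [0, 1]
    depth: ndc[2] * 0.5 + 0.5,                          // depth buffer value
  };
}

// Mirrors get_c1pos: screen uv + depth -> NDC -> unproject -> divide by w.
function unproject(uv, depth, fovY, aspect, zn, zf) {
  const f = 1 / Math.tan(fovY / 2);
  const A = (zf + zn) / (zn - zf);
  const B = (2 * zf * zn) / (zn - zf);
  const ndc = [ uv[0] * 2 - 1, uv[1] * 2 - 1, depth * 2 - 1, 1 ];
  // Analytic inverse of the projection above, applied to (ndc, w = 1):
  const pos = [
    (aspect / f) * ndc[0],
    (1 / f) * ndc[1],
    -ndc[3],
    (ndc[2] + A * ndc[3]) / B,
  ];
  return pos.slice(0, 3).map(c => c / pos[3]);          // divide by w
}
```

Round-tripping a point through project and unproject should return it unchanged; if it does not, the bug is in the reconstruction math rather than in the camera-to-camera transform.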

Here uInvProjViewC1 is the inverse of camera #1's combined projection * view matrix (hence the result pos_ws is in world space).

I already use the methods above for SSAO with success: they work as expected, and I get normal-oriented SSAO.

The method to get the XYZ coordinates of the same fragment, now in camera #2 space:

vec3 get_c2pos( in vec2 uv ) {
    vec4 pos = get_c1pos(uv);
    pos = uProjViewC2 * uC1toC2View * pos;
    pos.xyz /= pos.w;                   // perspective divide: clip space -> NDC
    pos.xy = ( pos.xy + 1.0 ) * 0.5;    // NDC -> [0, 1] texture coordinates
    pos.xy = clamp( pos.xy, 0.0, 1.0 );
    return pos.xyz;
}
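A quick numeric reminder of why the perspective divide has to happen before the [-1, 1] to [0, 1] remap (the clip-space values below are made up for illustration):

```javascript
// Remapping clip.xy straight to [0, 1] only matches the
// divide-then-remap result when w happens to equal 1.
const clip = [2, 2, 1, 4];                 // some projected point, w = 4
const noDivide = (clip[0] + 1) * 0.5;      // 1.5 -> wrongly "off screen"
const ndcX = clip[0] / clip[3];            // 0.5 after the divide
const withDivide = (ndcX + 1) * 0.5;       // 0.75 -> valid texture coordinate
```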

Here uProjViewC2 is camera #2's projection matrix, and uC1toC2View is the matrix that transforms from camera #1 space to camera #2 space.

In JavaScript, I calculate uC1toC2View with:

let c1_c2 = c2 * c1.invert(); // pseudo code for matrix calculation
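To make the pseudo code concrete, here is a minimal column-major sketch in plain JavaScript. The helper names are mine, and the view matrices are pure translations so that the inverse is trivial; it shows that c2 * inverse(c1) maps a point given in camera #1 space into camera #2 space:

```javascript
// Column-major 4x4 helpers (WebGL convention), no libraries.
function mat4Multiply(a, b) {               // returns a * b
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return out;
}

function mat4TransformVec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++)
    for (let k = 0; k < 4; k++)
      out[row] += m[k * 4 + row] * v[k];
  return out;
}

// View matrix of an axis-aligned camera at `eye` (world -> camera is a
// translation by -eye; its inverse is a translation by +eye).
function viewFromEye(eye) {
  return [1,0,0,0, 0,1,0,0, 0,0,1,0, -eye[0],-eye[1],-eye[2],1];
}
function invertTranslationView(eye) {
  return [1,0,0,0, 0,1,0,0, 0,0,1,0, eye[0],eye[1],eye[2],1];
}

const eye1 = [0, 0, 5], eye2 = [3, 0, 5];
const v1 = viewFromEye(eye1), v2 = viewFromEye(eye2);

// c1 -> c2: undo camera #1's view, then apply camera #2's view.
const c1toC2 = mat4Multiply(v2, invertTranslationView(eye1));

// A point at the world origin sits at z = -5 in camera #1 space...
const pC1 = mat4TransformVec4(v1, [0, 0, 0, 1]);
// ...and at x = -3, z = -5 in camera #2 space.
const pC2 = mat4TransformVec4(c1toC2, pC1);
```

With rotating cameras the same composition holds, but a general 4x4 inverse is needed instead of the translation shortcut. Note also that in column-major convention the order matters: v2 * inverse(v1) applies inverse(v1) first.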

However, the result I get is this:

[screenshot of the incorrect result]

Expected result: the bottom-left image is camera #2's view, which should be mapped correctly onto the ground in the 3D view. It is essentially the depth image from camera #2's point of view, used as a texture on the ground object, while the main rendering shows camera #1's point of view. Camera #2 can move around and change its FOV or other parameters.

Does anyone have experience with properly transforming fragment coordinates from one camera space to another? Thanks in advance!
