I use the gl-matrix JS library to transform points in 3D. When I apply only the model-view matrix (built with lookAt), the transformed points make sense.
But when I add a perspective projection on top of that, the resulting points no longer make sense: the coordinate values are ridiculously small. What am I doing wrong?
// four points on a square
const points = [
  vec3.fromValues(  0,   0, 0),
  vec3.fromValues(200,   0, 0),
  vec3.fromValues(  0, 200, 0),
  vec3.fromValues(200, 200, 0),
]
points.forEach(function(p, i) {
  console.log("point", i, p)
})
// look from above the center of the square straight down at it
const viewer = vec3.fromValues(100, 100, 200)
const lookat = vec3.fromValues(100, 100, 0)
const mv = mat4.create()
mat4.lookAt(mv, viewer, lookat, vec3.fromValues(0, 1, 0))
const transformed_points = [
  vec3.create(),
  vec3.create(),
  vec3.create(),
  vec3.create(),
]
points.forEach(function(p, i) {
  vec3.transformMat4(transformed_points[i], p, mv)
})
// the points transformed through the model-view matrix make sense: the center
// is at (0, 0) and the four points are +/- 100 around it
transformed_points.forEach(function(p, i) {
  console.log("point mv", i, p)
})
const proj = mat4.create()
const mvp = mat4.create()
// then, adding perspective to form the model-view-projection matrix, the values
// of the transformed points become ridiculously small, like +/- 0.5
mat4.perspective(proj, Math.PI/2, 1 /* aspect */, 100 /* near */, 1000 /* far */)
mat4.multiply(mvp, proj, mv)
points.forEach(function(p, i) {
  vec3.transformMat4(transformed_points[i], p, mvp)
})
transformed_points.forEach(function(p, i) {
  console.log("point mvp", i, p)
})
The result is as follows:
// those are the original points on a square
point 0 [ 0, 0, 0 ]
point 1 [ 200, 0, 0 ]
point 2 [ 0, 200, 0 ]
point 3 [ 200, 200, 0 ]
// the original points transformed with the MV matrix; the values make sense
// because the "world" is now centered at the observer's eye
point mv 0 [ -100, -100, -200 ]
point mv 1 [ 100, -100, -200 ]
point mv 2 [ -100, 100, -200 ]
point mv 3 [ 100, 100, -200 ]
// when I add the perspective, the transformed points make no sense.
point mvp 0 [ -0.5, -0.5, 0.11111106723546982 ]
point mvp 1 [ 0.5, -0.5, 0.11111106723546982 ]
point mvp 2 [ -0.5, 0.5, 0.11111106723546982 ]
point mvp 3 [ 0.5, 0.5, 0.11111106723546982 ]
What I expected was that after applying the perspective transformation the points would still be in some reasonable ~100 range, not all collapsed around +/- 0.5 in the center of the view.
The answer is that the perspective projection maps the coordinates into clip space, and gl-matrix's vec3.transformMat4 additionally performs the perspective divide (division by w), so the printed values are normalized device coordinates, which lie in the [-1, 1] cube:
https://learnopengl.com/Getting-started/Coordinate-Systems
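The perspective divide can be reproduced by hand, without gl-matrix, to see exactly where the +/- 0.5 values come from. This is a sketch using the standard perspective-matrix formula (the same one mat4.perspective implements), applied to the first eye-space point from the question:

```javascript
// Reproduce the projection of "point mv 0" by hand, without gl-matrix.
// mat4.perspective(out, fovy, aspect, near, far) builds a matrix with
// f = 1 / tan(fovy / 2) on the diagonal and -z copied into w.
const fovy = Math.PI / 2, aspect = 1, near = 100, far = 1000
const f = 1 / Math.tan(fovy / 2)

// the first point in eye (model-view) space:
const eye = [-100, -100, -200]

// multiplying by the projection matrix yields homogeneous clip coordinates:
const clip = [
  (f / aspect) * eye[0],                  // x_clip = -100
  f * eye[1],                             // y_clip = -100
  ((far + near) / (near - far)) * eye[2]
    + (2 * far * near) / (near - far),    // z_clip ~ 22.2
  -eye[2],                                // w_clip = 200
]

// vec3.transformMat4 silently divides by w (the perspective divide),
// which lands every visible point inside the [-1, 1] NDC cube:
const ndc = [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]]
console.log(ndc) // ~ [-0.5, -0.5, 0.111], matching "point mvp 0"
```

The divide by w = 200 is what shrinks the +/- 100 eye-space coordinates down to +/- 0.5.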
FTR, I fed this question to ChatGPT and got answers that looked valid but were actually misleading.
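For completeness, the ~100-unit range the question expected only reappears after the viewport transform, which the GPU normally applies for you. A minimal sketch, assuming a made-up 800x800 viewport (the size is not from the question):

```javascript
// Map NDC x, y (each in [-1, 1]) to window coordinates, mimicking the
// OpenGL viewport transform (origin at the bottom-left corner).
function ndcToScreen(ndc, width, height) {
  return [
    (ndc[0] + 1) / 2 * width,
    (ndc[1] + 1) / 2 * height,
  ]
}

// the four projected corners from the question, in NDC:
const corners = [[-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5], [0.5, 0.5]]
corners.forEach(function(c, i) {
  console.log("screen", i, ndcToScreen(c, 800, 800))
})
// the square spans 200..600 px in both axes: the +/- 0.5 values were
// never collapsed, just normalized to the unit cube
```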