In OpenGL your viewpoint is always at [0,0,0]. Say you have a vertex at this point as part of a cube or some other object. Is that vertex in front of or behind the camera/viewpoint? After projection I always end up with w=1 when z==0, which also (as expected) happens to vertices with z==-1. So in practice vertices with z=0 and z=-1 end up at the same distance after projection.
Look how vec(2,2,0) and vec(2,2,-1) end up with the same screen coordinates here: https://jsfiddle.net/sf4dspng/1/
Result:
vec1: x=2.0000, y=2.0000, z= 0.0000, w=1
proj1: x=0.9474, y=2.0000, z=-0.2002, w=1
norm1: x=0.9474, y=2.0000, z=-0.2002, w=1
view1: x=1.0000, y=0.0000, z= 0.3999, w=1
vec2: x=2.0000, y=2.0000, z=-1.0000, w=1
proj2: x=0.9474, y=2.0000, z= 0.8018, w=1
norm2: x=0.9474, y=2.0000, z= 0.8018, w=1
view2: x=1.0000, y=0.0000, z= 0.9009, w=1
Why is that?
The coordinate system you end up in after all of your transformations are applied and the implicit perspective divide has occurred (which happens after your vertex shader has run) is called normalized device coordinates. In this space, the visible range of coordinates is from (-1,-1,-1) to (1,1,1); everything else gets clipped. (0,0,0) is the center of the screen, halfway into the scene.

In clip coordinates, the coordinate system of the values returned by your vertex shader, the visible range of XYZ values is (-w,-w,-w) to (w,w,w), since NDC is computed by dividing the clip coordinates' XYZ values by the W value.
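To make that divide concrete for the two vectors in the question, here is a minimal sketch (in TypeScript, standing in for the fiddle's JavaScript) that treats (2,2,0) and (2,2,-1) as eye-space points, pushes them through a gluPerspective-style matrix and divides by W by hand. The field of view, aspect ratio and near/far planes are assumed for illustration; they are not taken from the fiddle.

// A minimal sketch with an assumed gluPerspective-style matrix
// (fovy 90 deg, aspect 1, near 0.1, far 100). Illustrative only.
type Mat4 = number[][]; // row-major 4x4

function perspective(fovyDeg: number, aspect: number, near: number, far: number): Mat4 {
  const f = 1 / Math.tan((fovyDeg * Math.PI) / 360);
  return [
    [f / aspect, 0, 0, 0],
    [0, f, 0, 0],
    [0, 0, (far + near) / (near - far), (2 * far * near) / (near - far)],
    [0, 0, -1, 0], // this row makes the clip-space W equal to -eyeZ
  ];
}

const mul = (m: Mat4, v: number[]): number[] =>
  m.map(row => row.reduce((sum, c, i) => sum + c * v[i], 0));

const proj = perspective(90, 1, 0.1, 100);
const eyePoints = [[2, 2, 0, 1], [2, 2, -1, 1]]; // the two vectors from the question

for (const eye of eyePoints) {
  const clip = mul(proj, eye);                        // what the vertex shader outputs
  const ndc = clip.slice(0, 3).map(c => c / clip[3]); // the implicit divide by W
  console.log('eye', eye, 'clip', clip, 'ndc', ndc);
}

// eyeZ =  0 gives clip W = 0: the divide blows up (Infinity/NaN), because the
// point lies in the camera plane, neither clearly in front of nor behind it.
// eyeZ = -1 gives clip W = 1, and the divide is well defined.

Whether you see W reported as 1 afterwards depends on whether your math library performs this divide for you; the W that gets printed is not necessarily the clip-space W.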
Beyond that, it's up to what your vertex shader implements. Spaces like "object" and "view" space are commonly set up by the vertex shader, but OpenGL is not actually aware of their existence. The visible values for those spaces depend on how you implement them in the vertex shader, and since the transitions between spaces are usually defined by matrices, the visible coordinates depend on the matrices you use.
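For example, a typical setup chains those spaces together like this (sketched with placeholder identity matrices rather than any specific shader or library); only the final clip-space value is something OpenGL itself knows about:

// Sketch of the conventional chain of spaces. The matrices are placeholders;
// in a real program they come from your own math code or library.
type Mat4 = number[][]; // row-major 4x4

const identity: Mat4 = [
  [1, 0, 0, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 0, 1],
];

const mul = (m: Mat4, v: number[]): number[] =>
  m.map(row => row.reduce((sum, c, i) => sum + c * v[i], 0));

const model = identity; // object -> world: defined entirely by you
const view = identity;  // world  -> eye:   "the camera at (0,0,0)" lives here
const proj = identity;  // eye    -> clip:  the only result OpenGL sees

const objectPos = [2, 2, 0, 1];         // "object space": a convention of yours
const worldPos = mul(model, objectPos); // "world space":  also just a convention
const eyePos = mul(view, worldPos);     // "eye/view space": camera at the origin
const clipPos = mul(proj, eyePos);      // handed to OpenGL as gl_Position
console.log(clipPos);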