I'm learning OpenGL and I'm a bit confused about setting vertex data position.
For instance, I want to draw a rectangle of 300 mm x 300 mm. As I understand it, I can assume 1 OpenGL unit = 1 mm and set the vertex data like this:
data = [-0.5,  0.5,  # top left
        -0.5, -0.5,  # bottom left
         0.5,  0.5,  # top right
         0.5, -0.5]  # bottom right
So the rectangle is 1 OpenGL unit (or 1 mm, if I'm not wrong) on each side, and I then scale it up by 300 using the model matrix.
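A minimal sketch of that scale step, using NumPy as a stand-in for whatever math library feeds your shader's model matrix (the matrix layout here is an assumption for illustration):

```python
import numpy as np

# Unit quad from above: x, y pairs, 1 unit on each side.
data = np.array([-0.5,  0.5,   # top left
                 -0.5, -0.5,   # bottom left
                  0.5,  0.5,   # top right
                  0.5, -0.5])  # bottom right

# 4x4 model matrix that scales by 300 in x and y.
scale = 300.0
model = np.diag([scale, scale, 1.0, 1.0])

# Promote each vertex to homogeneous coordinates (z = 0, w = 1)
# and apply the matrix.
verts = data.reshape(-1, 2)
verts4 = np.hstack([verts, np.zeros((4, 1)), np.ones((4, 1))])
scaled = (model @ verts4.T).T

print(scaled[:, :2])  # corners now span -150..150, i.e. 300 units wide
```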
Or I could set it like this:
data = [  0.0, 300.0,  # top left
          0.0,   0.0,  # bottom left
        300.0, 300.0,  # top right
        300.0,   0.0]  # bottom right
so the rectangle is 300 OpenGL units, or 300 mm, on each side.
I don't know which approach is the correct one.
Could you guys please point me in the right direction.
Thanks
OpenGL coordinates have no "units".
Whichever fragments get rasterized and end up, in Normalized Device Coordinates (NDC), inside the 2×2×2 box (or 2×2×1, depending on the clip-space convention) centred around the origin are mapped to window coordinates and written to the framebuffer (assuming they pass the depth test). How you get fragments there is up to you.
From your question I understand you are using an orthographic projection. In that case, check your orthographic projection matrix: it defines which coordinates end up at the screen edges. The actual physical length of one unit of your vertex coordinates is therefore a function of your projection and your physical screen size.
I strongly suggest reading about how the transformation from vertex coordinates to screen position happens, e.g. http://www.songho.ca/opengl/gl_transform.html (slightly outdated, but it covers the idea very well).
Also check http://www.songho.ca/opengl/gl_projectionmatrix.html, specifically the section on orthographic projection. It should give you an idea of how to construct an orthographic projection matrix with the bottom, left, right and top coordinates of your choice.
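As a concrete sketch of what such a matrix looks like (same element layout as the classic glOrtho, written with NumPy; the "mm" interpretation is entirely your own convention, OpenGL only ever sees the numbers):

```python
import numpy as np

def ortho(left, right, bottom, top, near=-1.0, far=1.0):
    """Orthographic projection matrix, same element layout as glOrtho."""
    m = np.identity(4)
    m[0, 0] = 2.0 / (right - left)
    m[1, 1] = 2.0 / (top - bottom)
    m[2, 2] = -2.0 / (far - near)
    m[0, 3] = -(right + left) / (right - left)
    m[1, 3] = -(top + bottom) / (top - bottom)
    m[2, 3] = -(far + near) / (far - near)
    return m

# Treat one unit as one millimetre: map 0..300 "mm" onto NDC -1..1.
proj = ortho(0.0, 300.0, 0.0, 300.0)

corner = np.array([300.0, 300.0, 0.0, 1.0])  # top-right vertex, in "mm"
print(proj @ corner)  # the top-right corner lands at NDC (1, 1)
```

With this projection, the second vertex layout from the question (0..300) fills the screen exactly; with a different projection the same numbers would mean something else, which is the sense in which the coordinates themselves have no units.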