Multitexturing used to be easy and straightforward. You bind your textures, you call `glBegin`, and then you do your rendering, except that instead of `glTexCoord` you call `glMultiTexCoord` for each texture. Then all of that got deprecated.
I'm looking around trying to figure out the Right Way to do it now, but all the tutorials I find, both from official Khronos Group sources and on blogs, assume that you want to use the same set of texture coordinates for every texture, which is a simplifying assumption that does not hold for my use case.
Let's say I have texture A and texture B, and I want to render the colors from texture B, in the rect `rB`, using the alpha values from texture A, in the rect `rA` (which has the same height and width as `rB`, for simplicity's sake, but not the same left and top values), using OpenGL 3 without any deprecated functionality. What would be the correct way to do this?
In the shaders you simply declare (and use) an extra set of texture coordinates and a second sampler.
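As a minimal sketch (GLSL 1.30, which corresponds to OpenGL 3.0; the attribute, varying, and sampler names such as `texCoordA` and `samplerB` are my own, illustrative choices), the vertex shader passes both coordinate sets through, and the fragment shader combines the color from texture B with the alpha from texture A:

```glsl
#version 130

in vec2 position;
in vec2 texCoordA;   // coordinates into texture A (the alpha source, rect rA)
in vec2 texCoordB;   // coordinates into texture B (the color source, rect rB)

out vec2 vTexCoordA;
out vec2 vTexCoordB;

void main()
{
    vTexCoordA  = texCoordA;
    vTexCoordB  = texCoordB;
    gl_Position = vec4(position, 0.0, 1.0);
}
```

```glsl
#version 130

uniform sampler2D samplerA;  // texture A: provides alpha
uniform sampler2D samplerB;  // texture B: provides color

in vec2 vTexCoordA;
in vec2 vTexCoordB;

out vec4 fragColor;

void main()
{
    float alpha = texture(samplerA, vTexCoordA).a;
    vec3  color = texture(samplerB, vTexCoordB).rgb;
    fragColor   = vec4(color, alpha);
}
```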
When specifying the model you add the second set of texCoords to the attributes:
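A corresponding client-side sketch, assuming an interleaved vertex layout of position, texCoordA, and texCoordB (two floats each), plus an already-linked program object `prog`, a vertex buffer `vbo`, and texture objects `textureA` and `textureB` (all hypothetical names):

```c
/* Interleaved vertex: x, y, sA, tA, sB, tB */
GLsizei stride = 6 * sizeof(GLfloat);

GLint posLoc = glGetAttribLocation(prog, "position");
GLint tcALoc = glGetAttribLocation(prog, "texCoordA");
GLint tcBLoc = glGetAttribLocation(prog, "texCoordB");

glBindBuffer(GL_ARRAY_BUFFER, vbo);

glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, stride, (void*)0);

glEnableVertexAttribArray(tcALoc);
glVertexAttribPointer(tcALoc, 2, GL_FLOAT, GL_FALSE, stride,
                      (void*)(2 * sizeof(GLfloat)));

glEnableVertexAttribArray(tcBLoc);
glVertexAttribPointer(tcBLoc, 2, GL_FLOAT, GL_FALSE, stride,
                      (void*)(4 * sizeof(GLfloat)));

/* Bind each texture to its own texture unit and point each sampler at its unit. */
glUseProgram(prog);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureA);
glUniform1i(glGetUniformLocation(prog, "samplerA"), 0);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureB);
glUniform1i(glGetUniformLocation(prog, "samplerB"), 1);
```

The key point is that the two coordinate sets are just two separate vertex attributes, so nothing forces them to match: you compute the coordinates for `rA` and `rB` independently when you fill the vertex buffer.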