I am using the OpenGL ES 2.0 GLSL functionality to manipulate images. What I want is to select a specific level of detail (LOD) when sampling a texture from within a fragment shader. Apparently texture2DLod() is not supported in fragment shaders in OpenGL ES 2.0, but there is an extension, GL_EXT_shader_texture_lod, that can optionally provide it.
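For reference, if that extension were available, my understanding is that the fragment shader could do something like this (a minimal sketch; the uniform/varying names and the LOD value are just placeholders, not from my actual code):

```glsl
#extension GL_EXT_shader_texture_lod : enable
precision mediump float;

uniform sampler2D u_texture;   // placeholder name
varying vec2 v_texCoord;       // placeholder name

void main()
{
    // Fetch explicitly from mipmap level 2.0 instead of letting the
    // hardware derive the LOD from the coordinate derivatives.
    gl_FragColor = texture2DLodEXT(u_texture, v_texCoord, 2.0);
}
```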
However, the standard texture2D() function accepts an optional third parameter, bias. As far as I understand, this parameter is an offset that is added to the LOD the hardware would otherwise compute.
In my tests I'm drawing a quad the size of the screen and magnifying a texture by scaling the sample coordinates. I've enabled mipmapping for the texture, using GL_LINEAR_MIPMAP_LINEAR as the minification filter. The result of sampling with a non-zero bias parameter on an ARM Mali-400 GPU can be seen in the image below.
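The fragment shader in that test is essentially equivalent to this sketch (the scale factor, bias value and variable names are made up for illustration, not copied from my actual code):

```glsl
precision mediump float;

uniform sampler2D u_texture;   // placeholder name
varying vec2 v_texCoord;       // spans [0,1] across the full-screen quad

void main()
{
    // Scaling the coordinates down shows only part of the texture,
    // i.e. the texture is magnified on screen.
    vec2 scaled = v_texCoord * 0.25;

    // Third argument is the LOD bias; without it there is no banding,
    // with a non-zero value the banding from the image above appears.
    gl_FragColor = texture2D(u_texture, scaled, 1.0);
}
```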
It seems to me that some pixels are using the minification filter while others are using the magnification filter.
How is this controlled? And how do I determine the initial LOD when using bias? Can it be adjusted from the vertex shader?
A related question is the following one:
How does a GLSL sampler determine the minification, and thus the mipmap level, of a texture?
Update: I noticed that if I sample the texture so that it covers the screen, it seems to do the proper mipmap look-up (even though the texture is magnified). If I then add a small offset to the sample coordinates, I begin to see the banding shown in the image above (which is magnified even more). If I don't use the bias parameter, I don't see any banding at all.
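Concretely, the behaviour in this update corresponds to a shader along these lines (the offset and bias values are made up for illustration):

```glsl
precision mediump float;

uniform sampler2D u_texture;   // placeholder name
varying vec2 v_texCoord;       // texture covers the whole screen
uniform vec2 u_offset;         // placeholder uniform for the small offset

void main()
{
    // With u_offset = vec2(0.0) the mipmap look-up appears correct.
    // With a small non-zero offset, e.g. vec2(0.1), the banding appears -
    // but only when the bias argument is non-zero.
    gl_FragColor = texture2D(u_texture, v_texCoord + u_offset, 1.0);
}
```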