I'm working on a scientific visualisation in OpenGL where I am essentially trying to create a "coloured fog" to illustrate a multidimensional field. I want the hue of the color to represent the direction of the field and the luminosity of the color to correspond to the intensity of the field.
I use GL_BLEND with glBlendFunc(GL_ONE, GL_ONE) to achieve additive blending of the colors of the surfaces I create.
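The relevant state setup looks roughly like this (a minimal sketch; the surface-drawing code itself is omitted):

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);   // additive blending: result = src + dst
    // ... draw the field surfaces here ...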
The problem is that in some places the colors get saturated. For example, (1, 0, 0) + (1, 0, 0) = (2, 0, 0), and when this is rendered it simply becomes (1, 0, 0) (i.e. the "overflow" is just chopped off). This is not how I would like it to be handled. I would like to handle the overflow by preserving hue and luminosity, i.e.
(2, 0, 0) should be translated into (1, 0.5, 0.5) (i.e. a lighter red: red with twice the luminosity of "pure" red).
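To make the mapping I have in mind concrete, here is a rough sketch (plain C++, outside the actual rendering code; the struct and function names are just for illustration) of one rule that reproduces the example above, assuming the excess above 1.0 in a channel is split equally between the other two channels:

    #include <algorithm>

    struct Color { float r, g, b; };

    // Redistribute the part of each channel that exceeds 1.0 into the other
    // two channels, so e.g. (2, 0, 0) becomes (1, 0.5, 0.5).
    Color redistributeOverflow(Color c)
    {
        float er = std::max(c.r - 1.0f, 0.0f);
        float eg = std::max(c.g - 1.0f, 0.0f);
        float eb = std::max(c.b - 1.0f, 0.0f);
        Color out;
        out.r = std::min(1.0f, c.r - er + 0.5f * (eg + eb));
        out.g = std::min(1.0f, c.g - eg + 0.5f * (er + eb));
        out.b = std::min(1.0f, c.b - eb + 0.5f * (er + eg));
        return out;
    }

A proper hue/luminosity-preserving mapping would probably go through an HSL-style conversion, but this shows the kind of result I am after.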
Is there any way to achieve this (or something similar) with OpenGL?
The output of the fragment shader is clamped to [0, 1] if the image format of the destination buffer is a normalized format (e.g. UNSIGNED_BYTE). If you use a floating-point format, the output is not clamped. See Blending and Image Format.
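For example (a minimal sketch; width and height are assumed to be the render-target size, and error checking is omitted), rendering into a floating-point FBO keeps the summed values unclamped, and a later post-processing pass can map anything above 1.0 back into range:

    // Create a floating-point color buffer so additive blending is not clamped.
    GLuint fbo, colorTex;
    glGenFramebuffers(1, &fbo);
    glGenTextures(1, &colorTex);

    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    // ... render the additively blended surfaces into this FBO, then run a
    // post-processing pass that remaps values > 1.0 however you like.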
It is tricky to blend the target buffer with a color using a function that is not supported by the fixed-function blending stage. A possible solution is to write a shader program that uses the extension EXT_shader_framebuffer_fetch.
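A rough GLSL sketch of that approach (assuming the OpenGL ES 2.0 flavour of the extension, where gl_LastFragData[0] exposes the color currently in the framebuffer; the varying name fieldColor is just for illustration):

    #extension GL_EXT_shader_framebuffer_fetch : require
    precision mediump float;

    varying vec3 fieldColor;   // color contribution of this surface

    void main()
    {
        // Read what is already in the framebuffer and add this fragment's color.
        // (Fixed-function blending would typically be disabled, since the
        // addition is done here instead.)
        vec3 sum = gl_LastFragData[0].rgb + fieldColor;

        // Custom overflow handling can be applied to `sum` here, rather than
        // letting the hardware simply clamp the result.
        gl_FragColor = vec4(sum, 1.0);
    }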
Furthermore, the extension KHR_blend_equation_advanced adds a number of "advanced" blending equations, like HSL_HUE_KHR, HSL_SATURATION_KHR, HSL_COLOR_KHR and HSL_LUMINOSITY_KHR.
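A sketch of how one of those equations might be selected (assuming the extension is available; GL_HSL_LUMINOSITY_KHR is just one of the listed modes):

    glEnable(GL_BLEND);
    // Pick one of the advanced HSL blend equations from KHR_blend_equation_advanced.
    glBlendEquation(GL_HSL_LUMINOSITY_KHR);

    // Without KHR_blend_equation_advanced_coherent, a barrier is required
    // between overlapping primitives (glBlendBarrier in GL ES 3.2 core):
    glBlendBarrierKHR();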