I've surveyed a number of order-independent transparency methods for my OpenGL engine, and my first instinct was to use weighted average blending to maximize speed.
However, my engine uses deferred shading, and I need to take this into account when choosing a blending technique. Ideally I would like a technique that wouldn't ask me to implement forward shading to use for translucent objects.
There are a number of cases where I need to use transparency:
- Grass/Hair (anti-aliased cutouts)
- Glass (colorful blending)
- Objects that fade in and out
- Smoke/Clouds
- Water/Liquid (would involve refraction, I know that true OIT is impossible here)
- Sparks/Magic/Fire (don't need to be lit and can use additive blending, not worried about these)
I am willing to sacrifice image correctness for the sake of speed (Hence my initial choice of weighted average blending). I don't need every layer of translucent objects to be lit, but I would at least like for the front-most pixels to be properly lit.
I'm using OpenGL 3.x+ Core Context, so I would like to avoid anything that requires OpenGL 4.x (as lovely as it would be to use), but I can freely use anything that isn't available in OpenGL 2.x.
My question is: What is the best order-independent transparency technique for deferred shading, and/or what is the best way to light/shade a translucent object when using deferred shading?
P.S. Is there a better way to render anti-aliased cutouts (grass/hair/leaves) that doesn't rely on blending? Pure alpha testing tends to produce ugly aliasing.

I'm not sure it fits your deferred renderer, but you might consider weighted, blended order-independent transparency. There's an older version without colored transmission (web) and a newer version that supports colored transmission (web) and a lot of other stuff. It is quite fast because it only needs one opaque pass, one transparency pass, and one composition pass, and it works with OpenGL 3.2+.
I implemented the first version and it works quite well, depending on your scene and a properly tuned weighting function, but it has problems with high alpha values. I didn't get good results with the weighting functions from the papers; I only got good results after switching to linear, normalized eye-space z-values.
Note that when using OpenGL < 4.0 you cannot specify a blending function per draw buffer (glBlendFunci), so you need to work around that (see the first paper).
Render opaque geometry to attachment #0 and depth buffer.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
Render transparent geometry to attachments #1 and #2. Turn off depth buffer writes, but leave depth testing enabled.
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE, GL_ONE, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
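The render targets also need matching clear values before this pass. This is my own setup sketch, not from the papers; the attachment layout is assumed to match the passes above:

```c
/* Assumed setup: attachment #1 (accum, e.g. RGBA16F) must clear to
 * (0,0,0,1) because its alpha channel accumulates a product via
 * GL_ZERO, GL_ONE_MINUS_SRC_ALPHA; attachment #2 (revealage, e.g.
 * R16F) clears to 0 because its red channel accumulates a sum. */
GLenum transparencyBuffers[] = { GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(2, transparencyBuffers);

const GLfloat clearAccum[]  = { 0.0f, 0.0f, 0.0f, 1.0f };
const GLfloat clearReveal[] = { 0.0f, 0.0f, 0.0f, 0.0f };
glClearBufferfv(GL_COLOR, 0, clearAccum);  /* draw buffer 0 -> attachment #1 */
glClearBufferfv(GL_COLOR, 1, clearReveal); /* draw buffer 1 -> attachment #2 */
```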
The fragment shader part writing the accumulation and revealage targets looks like this:
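A minimal sketch (GLSL 330; the input names and the weighting function are my own placeholders, which you'll need to tune for your scene):

```glsl
#version 330 core

in vec4 fragColor;    // shaded, non-premultiplied translucent color
in float linearDepth; // linear eye-space z, normalized to [0, 1]

layout (location = 0) out vec4 accum;      // attachment #1 (RGBA16F)
layout (location = 1) out float revealage; // attachment #2 (R16F)

void main()
{
    // Placeholder weight based on linear, normalized eye-space z;
    // tune this per scene.
    float weight = clamp(1.0 - linearDepth, 1e-2, 1.0);

    // The single blend function (GL_ONE, GL_ONE, GL_ZERO,
    // GL_ONE_MINUS_SRC_ALPHA) applies to both targets, so the
    // channels are swapped relative to the paper:
    //  - accum.rgb blends additively -> sum of weighted premultiplied colors
    //  - accum.a blends dst*(1-src) -> product of (1 - alpha), i.e. revealage
    //  - revealage (red) blends additively -> sum of weighted alphas
    accum = vec4(fragColor.rgb * fragColor.a * weight, fragColor.a);
    revealage = fragColor.a * weight;
}
```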
Bind attachment textures #1 and #2 and composite them to attachment #0 by drawing a full-screen quad with a composition shader.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA);
The fragment shader for composition looks like this:
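A sketch of that shader (GLSL 330; texture uniform names are my own). It assumes the swapped channels described above, i.e. the accumulation texture's alpha holds the revealage and the second texture holds the sum of weighted alphas:

```glsl
#version 330 core

uniform sampler2D accumTexture;     // attachment #1
uniform sampler2D revealageTexture; // attachment #2

out vec4 outColor;

void main()
{
    vec4 accum = texelFetch(accumTexture, ivec2(gl_FragCoord.xy), 0);
    float sumWeightedAlpha =
        texelFetch(revealageTexture, ivec2(gl_FragCoord.xy), 0).r;

    // Guard the divide against empty pixels.
    vec3 averageColor = accum.rgb / max(sumWeightedAlpha, 1e-5);

    // With glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA) this yields
    // averageColor * (1 - revealage) + background * revealage.
    outColor = vec4(averageColor, accum.a);
}
```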