I've been working on SSAO in OpenGL. I decided to implement SSAO from this tutorial in my deferred renderer. Unfortunately I've been unable to get it working well: the areas darkened by SSAO change greatly depending on the camera's position. I understand there may be some variation in the output of SSAO as the camera moves, but this is much greater than I have observed in other implementations of SSAO.
Here is the fragment shader code:
void main() {
    // reconstruct the eye space position of this fragment from the depth buffer
    vec3 origin = positionFromDepth(texture2D(gDepth, samplePosition).r);
    vec3 normal = texture2D(gNormal, samplePosition).xyz; // multiplying this by 2
                                                          // and subtracting 1 doesn't seem to help
    vec2 random = getRandom(samplePosition);
    float radius = uRadius / origin.z;
    float occlusion = 0.0;
    int iterations = samples / 4;
    for (int i = 0; i < iterations; i++) {
        // reflect the kernel vector off the random vector, plus a 45-degree
        // rotated copy, and take four samples at increasing radii
        vec2 coord1 = reflect(kernel[i], random) * radius;
        vec2 coord2 = vec2(coord1.x * 0.707 - coord1.y * 0.707,
                           coord1.x * 0.707 + coord1.y * 0.707);
        occlusion += occlude(samplePosition, coord1 * 0.25, origin, normal);
        occlusion += occlude(samplePosition, coord2 * 0.50, origin, normal);
        occlusion += occlude(samplePosition, coord1 * 0.75, origin, normal);
        occlusion += occlude(samplePosition, coord2, origin, normal);
    }
    color = vec4(origin, 1); // writing the reconstructed position for debugging
}
The positionFromDepth() function:
vec3 positionFromDepth(float depth) {
    float near = frustrumData.x;
    float far = frustrumData.y;
    float right = frustrumData.z;
    float top = frustrumData.w;
    vec2 ndc;
    vec3 eye;
    // linearize the depth buffer value into eye space z
    eye.z = near * far / ((depth * (far - near)) - far);
    // fragment coordinate -> normalized device coordinates in [-1, 1]
    ndc.x = ((gl_FragCoord.x / buffersize.x) - 0.5) * 2.0;
    ndc.y = ((gl_FragCoord.y / buffersize.y) - 0.5) * 2.0;
    // unproject using the frustum extents at the near plane
    eye.x = (-ndc.x * eye.z) * right / near;
    eye.y = (-ndc.y * eye.z) * top / near;
    return eye;
}
And the occlude() function:
float occlude(vec2 uv, vec2 offsetUV, vec3 origin, vec3 normal) {
    // vector from this fragment's position to the sampled neighbor's position
    vec3 diff = positionFromDepth(texture2D(gDepth, uv + offsetUV).r) - origin;
    vec3 vec = normalize(diff);
    float dist = length(diff) / scale;
    // occlusion falls off with distance and with the angle to the normal
    return max(0.0, dot(normal, vec) - bias) * (1.0 / (1.0 + dist)) * intensity;
}
I have a feeling the problem could be in the positionFromDepth() function, except that I use the same code for the lighting stage of the renderer, which works perfectly (I think). I've been over this code a thousand times and haven't found anything that stands out as wrong. I've tried a variety of values for bias, radius, intensity, and scale, but that doesn't seem to be the problem. I am worried either my normals or positions are wrong, so here are some screenshots of them:
The reconstructed position:
And the normal buffer:
I would include an image of the occlusion buffer, but the problem is mostly only obvious when the camera is moving, which a still image can't show.
Does anyone have any idea what's wrong here?
It is strange that multiplying by 2 and subtracting 1 does not help with your normal map. This is generally done to overcome issues associated with storing normals in unsigned/normalized texture formats: unless your normal G-Buffer uses a signed/unnormalized format, you need to pack your normals with * 0.5 + 0.5 when you first write to the texture and unpack them with * 2.0 - 1.0 when you sample it. In any case, there are multiple approaches to SSAO, and many do not use surface normals at all, so the question of which vector space the normals are stored in is often overlooked.
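For illustration, a minimal sketch of that pack/unpack (the gNormalOut output and vNormal varying are hypothetical names):

// Geometry pass: pack the [-1, 1] normal into a [0, 1] unsigned texture
gNormalOut = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);

// SSAO pass: unpack back into the [-1, 1] range before using it
vec3 normal = normalize(texture2D(gNormal, samplePosition).xyz * 2.0 - 1.0);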
I strongly suspect that your normals are in view space, rather than world space. If you multiplied your normal by the "normal matrix" in your vertex shader, like many tutorials will have you do, then your normals will be in view space.
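In other words, if your vertex shader does something like the following (legacy built-ins shown for illustration), the normals that reach your G-Buffer are in view space:

// gl_NormalMatrix is the inverse-transpose of the upper 3x3 of the
// modelview matrix, so the result lands in view space
vNormal = gl_NormalMatrix * gl_Normal;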
It turns out that view space normals are not all that useful these days, given the number of post-processing effects that work better with world space normals. Most modern deferred shading engines (e.g. Unreal Engine 4, CryEngine 3, etc.) store the normal G-Buffer in world space and then transform it into view space (if needed) in the pixel shader.
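A rough sketch of that approach, assuming a hypothetical uView uniform holding the camera's view matrix:

// Unpack a world space normal from the G-Buffer, then move it into
// view space only for the effects that actually need it
vec3 worldNormal = texture2D(gNormal, samplePosition).xyz * 2.0 - 1.0;
vec3 viewNormal = normalize(mat3(uView) * worldNormal);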
By the way, I have included some code that I use to reconstruct the object space position from the traditional depth buffer. You appear to be using view space position / normals. You might want to try everything in object/world space.
It takes a little additional setup in the vertex shader stage of the deferred shading lighting pass, which looks like this: