Issues with SSAO and Depth Map

Hello,

I’m currently writing an SSAO implementation (using a depth map, normals and noise), but for some reason it keeps giving me these stepped lines, the same kind of banding you see on a colour gradient fading across a plane.

What is weird is that I’m storing my depth map in an RGBA16 texture and still getting these lines, which in turn show up in the SSAO:

and these are the lines when I move to a wider position:

I’m linearizing the depth map to use in the SSAO:

float linearize_depth(float d, float zNear, float zFar) {
    // d is the window-space depth in [0,1]; returns the positive eye-space distance (-Zeye)
    return zNear * zFar / (zFar + d * (zNear - zFar));
}

// linearized depth, scaled by 0.5 before being written out
float d = linearize_depth(gl_FragCoord.z, 0.1, 1.0) * 0.5;
gl_FragColor = vec4(d, d, d, 1.0);

But I’m thinking that these lines are coming from the depth map, because they are the same ones that show up in the SSAO render.

How can I avoid these? Every SSAO implementation I try ends up having these lines all over it.

Regards,
Jakes

Can you elaborate on that?

A depth map is a single channel; why are you using a 4-channel format?

Hey GClements,

Yes, of course, I’m sorry about the short and abstract description.

I’m currently upgrading my system to render a scene into the well-known geometry data maps (position, normal, depth, color, etc.).

As easy as the concept might be, there are plenty (if not infinite) ways of doing each capture, and my issue is with some of them. For example:

  • Normal Map:

    • Which values should I write there? The normal for each fragment, i.e. gl_NormalMatrix * VertexNormal? I’ve read in other places that it should be transformed into camera/view space? (see the sketch after this list)
  • Depth Map:

    • Now this one is the most complex. I know it should hold the distance from the camera to each fragment; the issue is how to get that correctly, meaning should it be linear, non-linear, or something else?
      So I’m using the following:
      DepthValue = (gl_FragCoord.z / gl_FragCoord.w) * 0.1
      and it seems to work properly, but something tells me it’s not OK.
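
A minimal sketch of the kind of capture I mean (not my actual shaders; legacy-style GLSL, with uFar as a placeholder uniform):

// --- vertex shader ---
varying vec3 vViewNormal;                          // normal in view (camera) space
varying vec4 vViewPos;                             // position in view space

void main() {
    vViewNormal = gl_NormalMatrix * gl_Normal;     // transform the vertex normal into view space
    vViewPos    = gl_ModelViewMatrix * gl_Vertex;  // transform the vertex position into view space
    gl_Position = gl_ProjectionMatrix * vViewPos;
}

// --- fragment shader ---
varying vec3 vViewNormal;
varying vec4 vViewPos;
uniform float uFar;                                // far-plane distance (placeholder uniform)

void main() {
    // G-buffer output 0: view-space normal, remapped to [0,1] for an unsigned format
    gl_FragData[0] = vec4(normalize(vViewNormal) * 0.5 + 0.5, 1.0);
    // G-buffer output 1: linear depth = distance along -Z, normalised by the far plane
    gl_FragData[1] = vec4(vec3(-vViewPos.z / uFar), 1.0);
}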

Now my main concerns:

  • Rendering the depth map into an RGBA8 texture with unsigned bytes gives me shadow-acne-like lines all over it, whereas a GL_R32F texture with GL_FLOAT gives a better result, but I still need to tweak the multiplier in the depth calculation when I have a larger far range.

So my main concern is: what is the correct/best way to store such data, and what is the correct math to calculate it?

Yes, after a few tweaks, I changed my storage format to GL_R32F.

Your linearize_depth function is correct; it returns -Zeye (visible Zeye values are negative).

gl_FragCoord.z/gl_FragCoord.w doesn’t make sense because gl_FragCoord.z is depth in the [0,1] range. If you want ZNDC, you need to use ZNDC=2*depth-1. Multiplying ZNDC by Wclip (i.e. dividing by gl_FragCoord.w) gives you Zclip, which will be affine to Zeye (i.e. linearised). Specifically, Zeye=Zclip*(zNear-zFar)/(zNear+zFar) - 2*zFar*zNear/(zNear+zFar).
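
In shader terms, a minimal sketch of those steps (zNear/zFar assumed to be uniforms matching the projection):

uniform float zNear;    // near-plane distance
uniform float zFar;     // far-plane distance

float eyeDepth()
{
    float z_ndc  = 2.0 * gl_FragCoord.z - 1.0;   // window-space depth [0,1] -> NDC Z [-1,1]
    float z_clip = z_ndc / gl_FragCoord.w;       // gl_FragCoord.w is 1/Wclip, so this is Zndc * Wclip = Zclip
    // affine mapping from Zclip back to Zeye (negative in front of the camera)
    return z_clip * (zNear - zFar) / (zNear + zFar)
           - 2.0 * zFar * zNear / (zNear + zFar);
}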

So, just to clarify:

  • gl_FragCoord.z → yields a value from 0…1, on a logarithmic scale
  • 2.0*depth-1.0 → turns it into the range -1…1
  • (2.0*depth-1.0) / gl_FragCoord.w → Zclip; what range does that lie in?
  • Zclip*(zNear-zFar)/(zNear+zFar) - 2*zFar*zNear/(zNear+zFar) → Zeye

So is the formula below the whole process to turn gl_FragCoord.z into a linear depth range from the near to the far frustum plane?

LinearDepth = ((2.0*gl_FragCoord.z - 1.0) / gl_FragCoord.w) * (zNear-zFar)/(zNear+zFar) - 2*zFar*zNear/(zNear+zFar)

Regards

It isn’t logarithmic, it’s reciprocal; depth=(ZNDC+1)/2, ZNDC=C+D/Zeye where C,D are determined by the near/far planes.

Well, if you’re using a floating-point depth buffer, it’s sort of logarithmic in the sense that floatBitsToInt(x) approximates a·log2(x)+b. For that case, it’s common to set up the perspective projection so that depth is 1 (or -1) at the near plane and 0 at the far plane, as the increasing floating-point precision near zero offsets the decreasing Z scale factor. See glClipControl and glEnable(GL_DEPTH_CLAMP).

Also, note that the value in the depth buffer is affected by the glDepthRange settings. These are normally [0,1], but if you change them you have to adjust for that.
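
For example, with non-default glDepthRange values mirrored into (assumed) uniforms, undoing the viewport depth mapping would look like:

uniform float uRangeNear;   // value passed to glDepthRange as "near" (default 0.0)
uniform float uRangeFar;    // value passed to glDepthRange as "far"  (default 1.0)

float ndcZ()
{
    // invert the glDepthRange mapping: window-space depth -> NDC Z in [-1,1]
    return (2.0 * gl_FragCoord.z - uRangeNear - uRangeFar) / (uRangeFar - uRangeNear);
}

With the defaults this reduces to 2.0 * gl_FragCoord.z - 1.0.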

gl_Position is in clip coordinates. For a perspective projection, gl_Position.w isn’t constant (it’s usually -Zeye). NDC are gl_Position.xyz/gl_Position.w, but this transformation is done after clipping and interpolation (doing it before interpolation would give linear texture mapping rather than perspective-correct texture mapping).

Zeye = Zclip*(zNear-zFar)/(zNear+zFar) - 2*zFar*zNear/(zNear+zFar).
This is the inverse of the perspective transformation.

The third row of the perspective matrix is [0 0 C D] where
C = (zNear+zFar)/(zNear-zFar), D = 2*zFar*zNear/(zNear-zFar)

Zclip = C*Zeye+D*Weye
= C*Zeye+D (assuming Weye=1, which is almost invariably the case).
=> Zeye = Zclip/C - D/C

That gives you Zeye, which will be in the range [-zFar,-zNear]. But for that, it’s more efficient to ignore gl_FragCoord.w and just use:

Zeye = -zFar*zNear/(depth*(zNear-zFar)+zFar)

If you just want a linearised value and don’t care about the scale and offset, you only need

LinearDepth = (2*gl_FragCoord.z - 1) / gl_FragCoord.w

That gives you Zclip, which is in the range [-zNear,zFar].
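
As GLSL helpers (zNear/zFar again assumed to be uniforms), the two options would look roughly like:

uniform float zNear;
uniform float zFar;

// Eye-space Z reconstructed directly from the depth value; in [-zFar,-zNear] for visible fragments.
float eyeZ(float depth)          // depth = gl_FragCoord.z, assuming the default glDepthRange
{
    return -zFar * zNear / (depth * (zNear - zFar) + zFar);
}

// Cheaper linearised value: clip-space Z, affine to Zeye, in the range [-zNear, zFar].
float clipZ()
{
    return (2.0 * gl_FragCoord.z - 1.0) / gl_FragCoord.w;
}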

Thanks,

Those last two were a good example of what is most practical for my use case.
