Soft blending do it yourself? [SOLVED]

Hi,

I am trying to create my own soft blending for particles. My idea is to compare the eye space depth value of a particle fragment with the eye space depth value of the pixel already in the render buffer at the location the particle fragment is projected to.

So I do the following:

  • render all other stuff (geometry, objects)
  • read the depth buffer
  • render particles, using the depth buffer in a fragment shader to compare the particle fragment’s depth with the depth of the pixel that has already been stored in the render buffer

Now, to get the particle fragment’s eye space position, I simply need to do “vec4 particleVertex = gl_ModelViewMatrix * gl_Vertex” in the vertex shader and pass particleVertex as a varying to the fragment shader - right or wrong?
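
In shader code, I mean something like this as the vertex shader (just a sketch):

varying vec4 particleVertex; // particle vertex position in eye space

void main()
{
  particleVertex = gl_ModelViewMatrix * gl_Vertex;       // eye space
  gl_Position    = gl_ProjectionMatrix * particleVertex; // clip space
  gl_TexCoord[0] = gl_MultiTexCoord0;
}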

How do I retrieve the eye space depth value of a pixel from its Z value in the depth buffer, please?

Or do I have to do the comparison in world space?

You should do it with linear z values instead of the non-linear values that are stored in the depth buffer by default.

One way of doing this is:


uniform sampler2D DepthMap;
uniform vec2      CameraRange; // near and far z
uniform vec2      InvViewportSize; // 1.0/width, 1.0/height

float calc_depth( in float z )
{
  // linearize the depth value; the result is 1.0 at the far plane
  return (2.0 * CameraRange.x) / (CameraRange.y + CameraRange.x - z*(CameraRange.y - CameraRange.x) );
}

void main()
{
  vec2 uv = gl_FragCoord.xy * InvViewportSize;        // window position -> texture coords
  float z1 = calc_depth( texture2D(DepthMap, uv).x ); // scene depth at this pixel
  float z2 = calc_depth( gl_FragCoord.z );            // depth of the particle fragment
  // do your stuff here
}
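
The “stuff” could for example be a simple soft fade, something like this (untested; FadeDistance is a uniform you would add yourself):

uniform float FadeDistance; // eye space distance over which the particle fades in

  // in place of "do your stuff here":
  float diff = (z1 - z2) * CameraRange.y;              // depth difference in eye space units
  float fade = clamp( diff / FadeDistance, 0.0, 1.0 ); // 0 = touching the geometry, 1 = fully visible
  gl_FragColor = vec4( gl_Color.rgb, gl_Color.a * fade );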

Hopefully that will work. But I haven’t tested it.

You are right about particleVertex.

If you convert the depth back to eye space, the comparison becomes easier: the stored pixel lies on the same view ray as the fragment you are shading, so only the z component needs to be computed.

Here is the sequence of matrix multiplications which transforms an object space vertex into window coordinates:

(RES means the result of the previous operation)

Operation => Result

ModelViewMatrix * gl_Vertex => Eye space

ProjectionMatrix * RES => Clip space

RES.xyz / RES.w => Normalized device coordinates; visible pixels have x, y, z between -1 and 1 (w == 1)

RES.xy = (RES.xy * 0.5 + 0.5) * viewport_size + viewport_offset
RES.z  = RES.z * 0.5 + 0.5
=> RES.xy holds the pixel coordinates
=> RES.z is the depth value stored in the depth buffer
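
Written out in GLSL, the sequence looks like this (ViewportSize and ViewportOffset stand for your glViewport parameters; just a sketch):

vec4 eyePos  = gl_ModelViewMatrix * gl_Vertex;       // eye space
vec4 clipPos = gl_ProjectionMatrix * eyePos;         // clip space
vec3 ndc     = clipPos.xyz / clipPos.w;              // normalized device coordinates
vec2 winXY   = (ndc.xy * 0.5 + 0.5) * ViewportSize + ViewportOffset; // pixel coordinates
float winZ   = ndc.z * 0.5 + 0.5;                    // the value stored in the depth buffer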

To get back the required eye space Z, you only have to invert this sequence for the depth component.
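
Assuming a standard perspective projection (glFrustum/gluPerspective), that inverse could look like this (untested sketch; the function name is my own):

float eye_z_from_depth( in float depth, in float zNear, in float zFar )
{
  float z_ndc = depth * 2.0 - 1.0; // undo the window transform: [0,1] -> [-1,1]
  // undo the projection for z; note that eye space z is negative in OpenGL
  return (2.0 * zFar * zNear) / (z_ndc * (zFar - zNear) - (zFar + zNear));
}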

Take a look at this for more information about the operation.

You can see a DirectX implementation of this in the DirectX SDK; the sample is “SoftParticles”, from the June 2006 SDK.

datsua,

if zNear == 1.0 and zFar == 5000.0,

shouldn’t I get zNear (1.0) from your function calc_depth for z == 0.0 and zFar (5000.0) for z == 1.0?

(min) 2.0 / (5001.0 - 0.0 * 4999.0) = 0.0003999
(max) 2.0 / (5001.0 - 1.0 * 4999.0) = 1.0

Even if I multiply that by zFar, I get 1.999 (~2) for the min case. Shouldn’t that be 1.0?

If I reverse your function, it yields a scaled (non-linear) z value of 1.0 for an input z of 5000.0 (zFar), but a scaled value of -1.0 for an input value of 1.0 (zNear).

If I knew the function OpenGL uses to scale the input z, I think I could reverse it myself, but I don’t know it.

Edit:

Forget it. The function expects z ∈ [-1.0, 1.0], i.e. NDC z, not the [0.0, 1.0] value from the depth buffer.
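
So the fix is simply to remap both depth values from [0.0, 1.0] to [-1.0, 1.0] before calling calc_depth:

float z1 = calc_depth( texture2D(DepthMap, uv).x * 2.0 - 1.0 ); // scene depth
float z2 = calc_depth( gl_FragCoord.z * 2.0 - 1.0 );            // particle depth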

This works most nicely now. Thank you very much. :)

bertgp,

I am doing OpenGL, and while I have tried a few times, I never really understood DirectX samples. ;)

I think, however, that since gl_FragCoord.z and the depth buffer value have both been projected, it should be sufficient to linearize these two z values to get them back into eye space, and with datsua’s code from above this now works very well for me.
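
In case someone finds this thread later, my fragment shader now looks roughly like this (FadeDistance is my own uniform for the fade range):

uniform sampler2D DepthMap;
uniform vec2      CameraRange;     // near and far z
uniform vec2      InvViewportSize; // 1.0/width, 1.0/height
uniform float     FadeDistance;    // eye space distance over which the particle fades in

float calc_depth( in float z_ndc )
{
  return (2.0 * CameraRange.x) / (CameraRange.y + CameraRange.x - z_ndc*(CameraRange.y - CameraRange.x) );
}

void main()
{
  vec2 uv    = gl_FragCoord.xy * InvViewportSize;
  float z1   = calc_depth( texture2D(DepthMap, uv).x * 2.0 - 1.0 ); // scene depth, linearized
  float z2   = calc_depth( gl_FragCoord.z * 2.0 - 1.0 );            // particle depth, linearized
  float fade = clamp( (z1 - z2) * CameraRange.y / FadeDistance, 0.0, 1.0 );
  gl_FragColor = vec4( gl_Color.rgb, gl_Color.a * fade );
}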