How to force unquantized (-1.0->1.0) depth in GLSL


What are the key parameters to look at in order to have fragment depth mapped to the range (-1.0 -> 1.0) instead of the range (near plane -> far plane)? I’m trying to get rid of the far-plane divide in the following (extremely simple) shader code because I don’t think it’s neat :slight_smile:

Vertex shader:

void main( void ) {
	gl_Position = ftransform();
}

Fragment shader:

void main( void ) {
	float result = gl_FragCoord.z / gl_FragCoord.w;
	result /= 10.0; // <--- this must go
	gl_FragColor = vec4( result, result, result, 1.0 );
}

I’m rendering to an FBO which has a depth attachment with internal and external format GL_DEPTH_COMPONENT, and I have tried several data types (GL_FLOAT, GL_HALF_FLOAT, GL_UNSIGNED_BYTE), but nothing makes the fragment depth range from -1.0 to 1.0; the values always range from 0 to 10.0 (the far plane) :stuck_out_tongue:

It’s something simple I’m overlooking but I can’t put my finger on it. Maybe you can?


That division (which you should turn into a multiplication) is the fastest way.
You could move it to the vertex shader.

No MRT, just single rendertarget to an fbo. Sure it can be moved to the vertex shader (actually already did that using a varying float) but I posted it as being in the fragment shader to keep the example as simplified as possible.

You see, whatever code or book I look at, everyone seems to start off with a given -1.0,1.0 range. So somewhere in my code, which is huge, some annoying little thing is making it so that this is no longer the case; this just makes me curious as to the possible parameters which determine these 2 behaviours.

What are the key parameters to look at in order to have fragment depth mapped to the range (-1.0 -> 1.0) instead of the range (near plane -> far plane)?

I don’t understand your problem. After the perspective divide (I mean, after dividing by w), you get normalized device coordinates, which are in the [-1, 1] range. So you don’t need to divide by the far plane distance or anything! :slight_smile:
Then the actual depth stored in the depth buffer is in the [0, 1] range, which is 0.5 * [-1, 1] + 0.5.
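That clip -> NDC -> depth-buffer chain can be sketched numerically (a minimal sketch; the clip-space values below are made up for illustration, not taken from anyone’s actual scene):

```python
# Sketch of the clip -> NDC -> depth-buffer mapping described above.
# The clip-space coordinates here are illustrative placeholders.

def ndc_to_window_depth(z_ndc):
    # With the default glDepthRange(0, 1): window z = 0.5 * z_ndc + 0.5
    return 0.5 * z_ndc + 0.5

clip = (0.0, 0.0, 1.6, 2.0)         # (x, y, z, w) after the projection matrix

z_ndc = clip[2] / clip[3]           # perspective divide -> guaranteed [-1, 1]
depth = ndc_to_window_depth(z_ndc)  # what actually lands in the depth buffer

print(z_ndc, depth)                 # 0.8 0.9
```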

Hehe :slight_smile: yes it is confusing and I’ve been going over and over it in my head for a while as well so my explanation is probably pretty confusing in itself.

What I observed is that after the perspective divide I’m not getting normalized coordinates. What I’m getting is (well, erm, “what looked to me like”) the actual distance from the near plane to the frag.

So let’s say you have a near plane of 1 and a far plane of 21, and you draw a frag smack in the middle at distance 11; then after the perspective divide, what you should end up with is roughly 0.0, and what goes in the depth buffer is 0.5, right? That makes sense.

But instead, what I’m getting after the perspective divide is 10.0, so in order to fix this and get to that 0.5 that needs to go into the depth buffer (well, shadow map in this case), I need to divide that 10.0 by 20 (= far - near). For some reason, these coordinates seem to indeed be quantized to the (integer?) distance between the planes.

Don’t break your head over this though; all the code works, but it does seem bizarre and needless, so I’m just throwing it out there in case someone else has encountered this in the past. And seeing as Google is no help, this seems like a good place to leave a solution for others in the future :slight_smile:

gl_FragCoord.z already is window-space Z, i.e. the value that will be stored in the Z buffer. gl_FragCoord.w is the interpolated (1 / Wclip). Why exactly are you dividing those values?
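For what it’s worth, that division can be sketched numerically (a minimal sketch, assuming the standard perspective projection where Wclip equals the eye-space distance; the 1/21 near/far planes are borrowed from the example earlier in the thread):

```python
# Why gl_FragCoord.z / gl_FragCoord.w comes out near the eye-space
# distance: frag_z is window depth and frag_w is 1 / Wclip, so the
# division is really frag_z * z_eye. Assumes a standard perspective
# projection, where Wclip is the eye-space distance to the fragment.

near, far = 1.0, 21.0

def window_depth(z_eye):
    # z_ndc = (f + n)/(f - n) - 2*f*n / ((f - n) * z_eye)  -- nonlinear!
    z_ndc = (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * z_eye)
    return 0.5 * z_ndc + 0.5        # gl_FragCoord.z, default glDepthRange

z_eye = 11.0                        # fragment halfway between the planes
frag_z = window_depth(z_eye)        # ~0.955, not 0.5: depth is nonlinear
frag_w = 1.0 / z_eye                # gl_FragCoord.w = 1 / Wclip

print(frag_z / frag_w)              # ~10.5, close to the eye distance 11
```

Because window depth sits close to 1.0 over most of the view volume, frag_z / frag_w tracks the eye-space distance, which is exactly the [0, far] behaviour described above.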

Do you actually want w-buffering instead of z-buffering?