Compute eye space coord from window space coord?

Not necessarily as you have stated it.

The Orange Book states:
shadow2DProj divides the texture coordinate by coord.w and then compares the resulting third component (.z) with the value read from the bound depth sampler you provided.

Therefore it’s important you have the correct 3rd component for the shadow texture coord and a proper depth texture bound.
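For clarity, here is what that means spelled out by hand. A minimal GLSL 1.2 sketch, assuming the same depth texture were bound as a plain sampler2D (the sampler name is hypothetical) and the compare function is GL_LEQUAL:

uniform sampler2D depthAsPlain2D; // hypothetical: the depth texture viewed as a plain sampler2D

float shadowCompareByHand (vec4 coord)
{
   vec3 p = coord.xyz / coord.w;                      // the projective divide shadow2DProj does
   float stored = texture2D (depthAsPlain2D, p.xy).r; // depth value stored in the map
   return (p.z <= stored) ? 1.0 : 0.0;                // the GL_LEQUAL compare (h/w PCF may blend this)
}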

I have bound the current scene’s color and depth buffers and the shadow map (need the color buffer because the shader outputs scene + shadow).

So shadow2DProj compares scene depth to shadow depth for me, doesn’t it?

I have bound the current scene’s color and depth buffers and the shadow map (need the color buffer because the shader outputs scene + shadow).

Is that 3 bound textures then: color + depth buffer + shadow map?

Usually when rendering a shadow, you set up the camera and projection from the light's point of view and render the scene into an FBO which only contains a depth attachment. The only time you would have a colour attachment on this FBO is when you are using an alternative shadow generation technique where you don't just need gl_Position.z stored but some custom values instead, e.g. VSM or SAVSM, in which case the colour attachment is a 32-bit two-channel float.
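For example, the light pass of a VSM setup might store the two moments like this (just a sketch; the varying name is an assumption):

varying float lightDepth; // e.g. written by the vertex shader from the light-space position

void main ()
{
   // store depth and depth squared in the RG32F colour attachment
   gl_FragColor = vec4 (lightDepth, lightDepth * lightDepth, 0.0, 0.0);
}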

What you then call the ‘shadowmap’ is up to you: for hardware PCF shadowmapping the FBO depth buffer is the shadowmap; for VSM the colour attachment is the shadowmap.

So shadow2DProj compares scene depth to shadow depth for me, doesn’t it?

Be careful with what you intend here, as I tried to point out previously. The texture coordinate you supply is important because texcoord.z will be compared with the bound texture sampler's value. If you don't want this behaviour then switch to texture2DProj instead. The advantage of shadow2DProj is the h/w-assisted z/w divide and the h/w-assisted PCF filtering and compare.

Back to your question - I don't know. It depends upon your texture coordinate (z component) and which of the 3 textures you bound as the 'shadow map'. I can tell you that shadow2DProj expects a depth_component texture to be bound and declared as sampler2DShadow, and so is not a suitable instruction for VSM shadow mapping, because there you need to bind the RG32F colour attachment instead.
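For VSM you sample the two moments with an ordinary sampler2D and do the comparison yourself, e.g. with the Chebyshev upper bound. A sketch (names are mine):

uniform sampler2D vsmMap; // the RG32F colour attachment: (depth, depth^2)

float vsmShadow (vec2 uv, float receiverDepth)
{
   vec2 moments = texture2D (vsmMap, uv).rg;
   if (receiverDepth <= moments.x)
      return 1.0;                           // fully lit
   float variance = max (moments.y - moments.x * moments.x, 0.00001);
   float d = receiverDepth - moments.x;
   return variance / (variance + d * d);    // Chebyshev upper bound
}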

Therefore you should be able to answer the question yourself:

  1. Have I bound the light FBO depth buffer texture (a depth_component format)?
  2. Have I declared the sampler as sampler2DShadow?
  3. Have I enabled GL_LINEAR filtering (for free h/w PCF filtering)?
  4. Have I set the texture parameters:
    TEXTURE_COMPARE_MODE := GL_COMPARE_R_TO_TEXTURE;
    TEXTURE_COMPARE_FUNC := GL_LEQUAL;
    DEPTH_TEXTURE_MODE := GL_LUMINANCE;
  5. Have I set the clamp modes to CLAMP_TO_EDGE or CLAMP_TO_BORDER (and set the border colour to white) to prevent out-of-bounds shadow errors?
  6. Am I generating the shadow texture coordinates correctly, so that the Proj will do the z/w divide for me and my z component will be compared to the depth written in the depth texture?

You see, on note 6, if you need to play around with the depth value coming out of the depth texture (to convert it in any way, e.g. to NDC space), then shadow2DProj or shadow2D is not going to work. You'd need to use texture2DProj instead.
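If all six items check out, the lookup itself is tiny. A minimal sketch, assuming the texture coordinate was built as in note 6:

uniform sampler2DShadow shadowMap; // the light FBO depth attachment (depth_component format)
varying vec4 shadowCoord;          // scale_bias * lightProjection * lightView * vertex

void main ()
{
   // the h/w divides shadowCoord.xyz by .w, compares .z against the map and PCF-filters the result
   float lit = shadow2DProj (shadowMap, shadowCoord).r; // 1.0 = fully lit, 0.0 = fully shadowed
   gl_FragColor = vec4 (vec3 (lit), 1.0);
}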

BionicBytes,

I don't know whether you have read the entire thread, but I am trying to render shadows into a frame buffer as a post process. So I render the shadow map as depth only in one render pass, then render the scene without shadows, then apply the shadow map to the scene - a kind of deferred shadowing.

The post process shader blends the shadow map into the scene and returns the darkened or lit scene fragments depending on whether they are in shadow or not.

I have implemented deferred shadowing into my engine to complement the deferred lighting.

In this way not only is the lighting decoupled from the geometry, but the shadow generation techniques (VSM, PCF, SAVSM, CVSM, etc) are decoupled from the lighting shaders.

After the various shadow maps have been created for each scene light (using VSM, PCF, etc.) a 2D post process is used to create a shadow mask - a 4-channel RGBA8 texture which gathers up to 4 scene lights' shadow contributions and is then accessed during the lighting phase. It is only during this post process that the shadow comparisons take place; the results of the comparisons are written to a colour texture (aka the shadow mask) as 'shadow occlusion values'. This texture can be blurred safely, unlike shadow maps.

During the lighting phase, the shadow mask texture (an RGBA8 colour) is bound and accessed in the various lighting shaders, and the beauty is that I only ever need to access the RGBA8 shadow mask texture and therefore only need one variant of the lighting shader, no matter which technique generated the shadows in the first place.
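Reading the mask in a lighting shader then reduces to a single fetch. A sketch (names are mine), with each scene light owning one channel of the RGBA8 mask:

uniform sampler2D shadowMask; // RGBA8 shadow mask, one channel per scene light
uniform vec4 lightChannel;    // e.g. (1,0,0,0) selects this light's channel

float shadowOcclusion (vec2 screenUV)
{
   // pick this light's occlusion value out of the mask with a dot product
   return dot (texture2D (shadowMask, screenUV), lightChannel);
}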

Now…I have been reading this thread with great interest. I may actually have been slow off the mark, but I did not realise you were creating a deferred shadow system (although you did say something about a post-process which I did not cotton on to). Does your system match what I am doing (which came from Crysis and other games)?

The reason why I ask all of this is that in the deferred system I store eye-space vertex positions of the geometry in the G-buffer (rather than having to reconstruct them from scene depth). When rendering the scene from the light's POV, gl_Vertex gets transformed into eye space. Therefore, to calculate the shadow map texture coordinates you need scale bias * light projection matrix * light view matrix * inverse scene camera matrix * gl_Vertex from the G-buffer.

I use the following calculation to generate a matrix to pass to the shadow compare shader (the one creating the post-process shadow mask)

Procedure setShadowMatrix (var projection, view: TMatrix);
const offset: GLMatrixf = (0.5,0,0,0,  0,0.5,0,0,  0,0,0.5,0,  0.5,0.5,0.5,1);
begin
  // assumes the current matrix mode is GL_TEXTURE
  glLoadMatrixf (@offset[0]);				//convert clip space to texture space
  glMultMatrixf (@projection.glmatrixf[0]);		//light's projection
  glMultMatrixf (@view.glmatrixf[0]); 			//light's camera
  glMultMatrixf (@CameraMatrix_inv.glmatrixf[0]); 	//scene camera inverse
  glGetFloatv (GL_TEXTURE_MATRIX, @shadowmatrix.glmatrixf[0]);
end;

Hence what I just said above: scale bias * light projection matrix * light view matrix * inverse scene camera matrix.
The idea here is to end up in eye space, because of the next piece below:


//--------shadowing apply: texture compare GLSL shader snippet----------------------------------------------
//Shadow Texturematrix[0]=scale_bias * light project matrix * light camera view * scene camera view_inverse
   shadowCoord = gl_TextureMatrix[0] * vec4 (ecEyeVertex.xyz, 1.0);	//ecEyeVertex.w must be 1.0 or projected shadows not correct
   shadowCoordPostW = shadowCoord / shadowCoord.w;	//only need this when sampler is not shadow2D variant

The idea in the shadow compare is to compare the z of the original scene (the eye-space position stored in the G-buffer) against the light's z value (in the shadow map texture). The trick is to ensure the computed shadow texture coordinates contain the original scene's vertex at any one pixel. Since my G-buffer stores eye-space vertex positions, I needed to undo the original eye-space camera transform (hence the multiply by the inverse camera matrix) to recover the object-space gl_Vertex.xyz of the original scene.

This is accomplished with: gl_TextureMatrix[0] * vec4 (ecEyeVertex.xyz, 1.0);

So now I have: shadow texture coords = scale_bias * light projection matrix * light camera view * scene camera view inverse * ecEyeVertex.xyz,
i.e. I have obtained the projected position of the original scene vertex as seen by the LIGHT's camera (light eye space), converted to texture coordinates.
This is ready to be compared against the light's depth texture using the shadow2DProj command.
So, to be explicit: the texture coordinates now contain the standard scene vertex, but transformed by the light's camera,
and the shadow map contains the scene depth, also transformed by the light's camera.

Both of these are in texture space ([0…1] range) thanks to the scale bias * light projection transforms, and because both are in the same space the comparison is valid.
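Put together, the compare pass of the shadow mask shader boils down to something like this (a trimmed sketch; the uniform names are assumptions):

uniform sampler2DShadow shadowMap; // light FBO depth attachment
uniform sampler2D eyePosBuffer;    // G-buffer target holding eye-space vertex positions

void main ()
{
   vec3 ecEyeVertex = texture2D (eyePosBuffer, gl_TexCoord [0].xy).xyz;
   vec4 shadowCoord = gl_TextureMatrix [0] * vec4 (ecEyeVertex, 1.0);
   // h/w z/w divide + GL_LEQUAL compare: 1.0 = lit, 0.0 = in shadow
   gl_FragColor = vec4 (shadow2DProj (shadowMap, shadowCoord).r);
}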

OK, so why the long post?
Well, I think you may have tried to shortcut the process by going directly into clip space (just my opinion). You have also tried to compute the eye-space position of the vertex from NDC. The problem is that each step along the way needs to be verified and checked, and since you generally can't debug GLSL, that's nearly impossible - hence some of the problems.

I have tried to explain what I do, and in doing so help you with yours, even if I am using eye space for everything plus the convenience of the deferred G-buffer. When I first started all of this I was convinced that OpenGL fixed functionality was nuts for doing everything in eye space, and that I would be better off using whatever space I wanted. But, more and more, eye space turns out to be very convenient for all sorts of reasons. Perhaps I am suggesting you do things in eye space throughout; that WILL simplify all your calculations and comparisons.

I would like to see you get this working with the least amount of effort and time (even if that means eye space for now). Later on you can show us all how to do this in NDC or clip space, and why that's better (even if it's just a convenience for you).

Thanks for your input. I am doing this similarly, I just compute the vertex in the shader for now. I have been considering storing camera-space coordinates in a second render target when rendering my scene, but that is getting complicated because I am already applying a few shaders when rendering the scene; I haven't spent much thought yet on how to add a shader that would only fill an extra render target with the coordinates, or how to extend all the other shaders appropriately (and I want to keep the number of render passes as low as possible).

Shadow map blending actually works quite well now. The next step will be to render the shadow maps into an empty color buffer, using the scene depths to properly discard fragments, blur the resulting color buffer (which should contain the shadows as RGB), and then just slap that texture onto the scene - just to get soft, blurred shadows.

My long-term goal is to implement deferred lighting with shadow maps. I will need to change shadow map handling a bit for that, but until then the route above is the one I have chosen to take.

The biggest problem I am facing right now is that rendering a shadow map with a FOV of 180 deg doesn't show the 180 deg view of the scene the shadow-casting light source should see. I think I will switch to single (for directed lights) and dual (for point lights) paraboloid shadow maps, since these give you a 360 deg mapping but save you the 2 - 4 extra shadow map renders required for cubic shadow maps (http://wiki.delphigl.com/index.php/GLSL_Licht_und_Schatten#Beschleunigtes_Rendern; German - sorry). Another limitation I will add is that only moving lights will cast shadows: in my application these are the only light sources I have no lightmaps for, so the app has to do the full lighting for them - hooking up shadow maps there seems logical. These lights will also create moving shadows, which should add a lot of dynamics to the game. Since the game has a lot of light sources, only the lights closest to the player will cast shadows, and a shadow will be lighter the further it is from the light source. That should avoid overly hard effects with shadows suddenly popping in and out of the scene.

So your shadow swimming has been fixed?
You are able to use Shadow2DProj if you so wish?

Can you post your final code to:

  1. Produce the shadow matrix texture coords
  2. GLSL shader to reconstruct EYE space from scene depth texture
  3. GLSL shader to perform shadow map lookup/comparison using eye, NDC or clip space - please advise which space it’s working in.

What do you mean by 180 degree FOV? You wouldn’t put that into gluPerspective would you - just seems a rather large value?

Swimming fixed: Yes
shadow2DProj working: No

You may need to ponder a little on the implementation details of the following code, but it shouldn’t be too hard to understand.

OpenGL matrix implementation:


#ifndef _OGLMATRIX_H
#define _OGLMATRIX_H

#include <string.h>
#include "glew.h"

class COGLMatrix {
  private:
    double  m_data [16];
    GLfloat m_dataf [16];

  public:
    inline COGLMatrix& operator= (const COGLMatrix& other) {
      memcpy (m_data, other.m_data, sizeof (m_data));
      return *this;
      }

    inline COGLMatrix& operator= (const double other [16]) {
      memcpy (m_data, other, sizeof (m_data));
      return *this;
      }

    COGLMatrix Inverse (void);

    COGLMatrix& Get (GLuint nMatrix, bool bInverse = false) {
      glGetDoublev (nMatrix, (GLdouble*) m_data);
      if (bInverse)
        *this = Inverse ();
      return *this;
      }

    void Set (void) { glLoadMatrixd ((GLdouble*) m_data); }

    void Mul (void) { glMultMatrixd ((GLdouble*) m_data); }

    double& operator[] (int i) { return m_data [i]; }

    GLfloat* ToFloat (void) {
      for (int i = 0; i < 16; i++)
        m_dataf [i] = GLfloat (m_data [i]);
      return m_dataf;
      }

    COGLMatrix& operator* (double factor) {
      for (int i = 0; i < 16; i++)
        m_data [i] *= factor;
      return *this;
      }

    // first row of *this dotted with the first column of other (used by Inverse)
    double Det (COGLMatrix& other) { return m_data [0] * other [0] + m_data [1] * other [4] + m_data [2] * other [8] + m_data [3] * other [12]; }
  };

#endif //_OGLMATRIX_H

Inverse function:


COGLMatrix COGLMatrix::Inverse (void)
{
	COGLMatrix im;

im [0] =  m_data [5] * m_data [10] * m_data [15] - m_data [5] * m_data [11] * m_data [14] - m_data [9] * m_data [6] * m_data [15] + m_data [9] * m_data [7] * m_data [14] + m_data [13] * m_data [6] * m_data [11] - m_data [13] * m_data [7] * m_data [10];
im [4] = -m_data [4] * m_data [10] * m_data [15] + m_data [4] * m_data [11] * m_data [14] + m_data [8] * m_data [6] * m_data [15] - m_data [8] * m_data [7] * m_data [14] - m_data [12] * m_data [6] * m_data [11] + m_data [12] * m_data [7] * m_data [10];
im [8] =  m_data [4] * m_data [9] * m_data [15] - m_data [4] * m_data [11] * m_data [13] - m_data [8] * m_data [5] * m_data [15] + m_data [8] * m_data [7] * m_data [13] + m_data [12] * m_data [5] * m_data [11] - m_data [12] * m_data [7] * m_data [9];
im [12] = -m_data [4] * m_data [9] * m_data [14] + m_data [4] * m_data [10] * m_data [13] + m_data [8] * m_data [5] * m_data [14] - m_data [8] * m_data [6] * m_data [13] - m_data [12] * m_data [5] * m_data [10] + m_data [12] * m_data [6] * m_data [9];
im [1] =  -m_data [1] * m_data [10] * m_data [15] + m_data [1] * m_data [11] * m_data [14] + m_data [9] * m_data [2] * m_data [15] - m_data [9] * m_data [3] * m_data [14] - m_data [13] * m_data [2] * m_data [11] + m_data [13] * m_data [3] * m_data [10];
im [5] =   m_data [0] * m_data [10] * m_data [15] - m_data [0] * m_data [11] * m_data [14] - m_data [8] * m_data [2] * m_data [15] + m_data [8] * m_data [3] * m_data [14] + m_data [12] * m_data [2] * m_data [11] - m_data [12] * m_data [3] * m_data [10];
im [9] =  -m_data [0] * m_data [9] * m_data [15] + m_data [0] * m_data [11] * m_data [13] + m_data [8] * m_data [1] * m_data [15] - m_data [8] * m_data [3] * m_data [13] - m_data [12] * m_data [1] * m_data [11] + m_data [12] * m_data [3] * m_data [9];
im [13] =  m_data [0] * m_data [9] * m_data [14] - m_data [0] * m_data [10] * m_data [13] - m_data [8] * m_data [1] * m_data [14] + m_data [8] * m_data [2] * m_data [13] + m_data [12] * m_data [1] * m_data [10] - m_data [12] * m_data [2] * m_data [9];
im [2] =   m_data [1] * m_data [6] * m_data [15] - m_data [1] * m_data [7] * m_data [14] - m_data [5] * m_data [2] * m_data [15] + m_data [5] * m_data [3] * m_data [14] + m_data [13] * m_data [2] * m_data [7] - m_data [13] * m_data [3] * m_data [6];
im [6] =  -m_data [0] * m_data [6] * m_data [15] + m_data [0] * m_data [7] * m_data [14] + m_data [4] * m_data [2] * m_data [15] - m_data [4] * m_data [3] * m_data [14] - m_data [12] * m_data [2] * m_data [7] + m_data [12] * m_data [3] * m_data [6];
im [10] =  m_data [0] * m_data [5] * m_data [15] - m_data [0] * m_data [7] * m_data [13] - m_data [4] * m_data [1] * m_data [15] + m_data [4] * m_data [3] * m_data [13] + m_data [12] * m_data [1] * m_data [7] - m_data [12] * m_data [3] * m_data [5];
im [14] = -m_data [0] * m_data [5] * m_data [14] + m_data [0] * m_data [6] * m_data [13] + m_data [4] * m_data [1] * m_data [14] - m_data [4] * m_data [2] * m_data [13] - m_data [12] * m_data [1] * m_data [6] + m_data [12] * m_data [2] * m_data [5];
im [3] =  -m_data [1] * m_data [6] * m_data [11] + m_data [1] * m_data [7] * m_data [10] + m_data [5] * m_data [2] * m_data [11] - m_data [5] * m_data [3] * m_data [10] - m_data [9] * m_data [2] * m_data [7] + m_data [9] * m_data [3] * m_data [6];
im [7] =   m_data [0] * m_data [6] * m_data [11] - m_data [0] * m_data [7] * m_data [10] - m_data [4] * m_data [2] * m_data [11] + m_data [4] * m_data [3] * m_data [10] + m_data [8] * m_data [2] * m_data [7] - m_data [8] * m_data [3] * m_data [6];
im [11] = -m_data [0] * m_data [5] * m_data [11] + m_data [0] * m_data [7] * m_data [9] + m_data [4] * m_data [1] * m_data [11] - m_data [4] * m_data [3] * m_data [9] - m_data [8] * m_data [1] * m_data [7] + m_data [8] * m_data [3] * m_data [5];
im [15] =  m_data [0] * m_data [5] * m_data [10] - m_data [0] * m_data [6] * m_data [9] - m_data [4] * m_data [1] * m_data [10] + m_data [4] * m_data [2] * m_data [9] + m_data [8] * m_data [1] * m_data [6] - m_data [8] * m_data [2] * m_data [5];

double det = Det (im);
if (det == 0.0)	// singular matrix: give up and return the un-inverted matrix
	return *this;

det = 1.0 / det;
for (int i = 0; i < 16; i++)
	im [i] *= det;
return im;
}

Shadow matrix texture coords:


// The following code is called after the modelview and projection matrices have been filled with the proper values. modelView and projection are instances of a simple class that wraps OpenGL matrices in the application. No bias matrix here - that part is handled by the shader.

static void ComputeShadowTransformation (int nLight)
{
modelView.Get (GL_MODELVIEW_MATRIX);   // load the modelview matrix
projection.Get (GL_PROJECTION_MATRIX); // load the projection matrix
glActiveTexture (GL_TEXTURE1 + nLight);
projection.Set ();
modelView.Mul ();
lightManager.ShadowTransformation (nLight).Get (GL_TEXTURE_MATRIX);
}

Compute inverse modelview * inverse projection. Inverse code from MESA source code.


ogl.SetupTransform ();
lightManager.ShadowTransformation (-1).Get (GL_MODELVIEW_MATRIX, true); // inverse
lightManager.ShadowTransformation (-2).Get (GL_PROJECTION_MATRIX, true); 
ogl.ResetTransform ();
glPushMatrix ();
lightManager.ShadowTransformation (-1).Set ();
lightManager.ShadowTransformation (-2).Mul ();
lightManager.ShadowTransformation (-3).Get (GL_MODELVIEW_MATRIX, false); // inverse (modelview) * inverse (projection) = (projection * modelview)^-1
glPopMatrix ();

Fragment shader. The shader also makes the shadow lighter depending on the distance of the geometry from the light. The shader does what the bias matrix would otherwise do; I can't make it work any other way.


uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;
uniform mat4 modelviewProjInverse;
uniform vec3 lightPos;
uniform float lightRange;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define A 5001.0 //(ZNEAR + ZFAR)
#define B 4999.0 //(ZNEAR - ZFAR)
#define C 10000.0 //(2.0 * ZNEAR * ZFAR)
#define D (cameraNDC.z * B)
#define ZEYE -10000.0 / (5001.0 + cameraNDC.z * 4999.0) //-(C / (A + D))

void main() 
{
float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
vec3 cameraNDC = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
vec4 cameraClipPos;
cameraClipPos.w = -ZEYE;
cameraClipPos.xyz = cameraNDC * cameraClipPos.w;
vec4 lightClipPos = gl_TextureMatrix [2] * cameraClipPos;
float w = abs (lightClipPos.w);
// avoid divides by too small w and clip the shadow texture access to avoid artifacts
float shadowDepth = 
   ((w < 0.00001) || (abs (lightClipPos.x) > w) || (abs (lightClipPos.y) > w)) 
   ? 2.0 
   : texture2D (shadowMap, lightClipPos.xy / (lightClipPos.w * 2.0) + 0.5).r;
float light = 1.0;
if (lightClipPos.z >= (lightClipPos.w * 2.0) * (shadowDepth - 0.5)) {
   vec4 worldPos = modelviewProjInverse * cameraClipPos;
   float lightDist = length (lightPos - worldPos.xyz);
   light = sqrt (min (lightDist, lightRange) / lightRange);
   }
gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}
";

I wanted to render the shadow map with a 180 deg FOV to have it cover the hemisphere the corresponding light source actually illuminates. Didn't work though (that would just have been too easy - heh!).

Don’t think so. Your light window coordinates are 0…1, 0…1, and you’ve got the *0.5+0.5 in there to take your NDC coords to that space.

shadow2DProj compares against the depth from the frame buffer, doesn’t it? And that doesn’t work for me.

By itself? No (not AFAIK). But in GLSL 1.2 and earlier, that’s one required piece out of four to do hardware depth comparisons.

In GLSL 1.2 and earlier, you have to do all of these things:

  1. use a shadow2D* texture access function (such as shadow2DProj) to sample the depth texture,
  2. use a sampler2DShadow sampler in the shader for the depth texture,
  3. bind a depth texture to it in your app, and
  4. set the depth compare attributes on the depth texture before invoking your shader.

texture2DProj merely does the texcoord.xyz/.w step and can be used totally independently of depth textures and depth compare. shadow2DProj does that plus implies doing a depth compare too (with the extra .z texcoord component you passed in), assuming you've done all the other things above.

In GLSL 1.3 and later, they realized that it was pointless to have the explosion in texture sampling function names based on texture type, shadow/non-shadow, etc. so #1 in the above list simplifies in GLSL 1.3+ to just calling the “texture” texture sampling functions (e.g. texture, textureProj, etc. – all overloaded by sampler type).
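As an illustration (a sketch, not lifted from any particular spec example), the GLSL 1.3+ spelling of the same lookup would be:

#version 130

uniform sampler2DShadow shadowMap;
in vec4 shadowCoord;
out vec4 fragColor;

void main ()
{
   // textureProj does the .w divide and the depth compare, just as shadow2DProj did
   float lit = textureProj (shadowMap, shadowCoord);
   fragColor = vec4 (vec3 (lit), 1.0);
}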

As an example of how to set up depth compare on the texture:


  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER  , GL_NEAREST );
  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER  , GL_NEAREST );
  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S      , GL_CLAMP_TO_EDGE );
  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T      , GL_CLAMP_TO_EDGE );
  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE );
  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL );
  glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE  , GL_INTENSITY );

Thanks. I know that, but my shader doesn't even work when using texture2DProj instead of computing everything manually, and it also doesn't work right when comparing the depth value from the scene with the corresponding depth value from the shadow map. The only thing that works is to compute the shadow-map depth value of the related scene fragment and compare that with the depth value stored in the shadow map at the corresponding shadow map position (in other words: compute the eye position of the fragment in the scene, compute the light window position from that, and compare that light window position's depth value with the depth value in the shadow map).

This is certainly due to an oversight or misunderstanding on my side, but I haven’t yet figured where or why that had happened.

Anyway, what is texture2DProj good for when it only divides by w and doesn’t also do the scaling and translation? After all,

vec3 lightWinPos = lightClipPos.xyz / lightClipPos.w * 2.0 + 0.5;

isn't it? So since I couldn't even get this to work when using texture2DProj, I didn't bother trying shadow2DProj, since it is just based on texture2DProj and does something on top of it (depth value lookup and comparison). Now of course OpenGL is not the problem here, but rather my limited or wrong understanding of this function, so some enlightenment would be more than welcome. :)

Another question: Does multiplication with the bias matrix just do the scaling and translation, or also the w divide?

Well, it’s actually *0.5+0.5.

And if you slip that *0.5+0.5 in your shadow matrix (as you did – the “bias” matrix), then you can effectively do it first and defer the perspective divide until the very last operation before texture sampling (either doing it yourself or letting texture2DProj/shadow2DProj do it for you). This is common.
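The reason this works is the scalar identity (0.5*x + 0.5*w) / w == 0.5*(x/w) + 0.5: biasing while still homogeneous and then dividing gives the same result as dividing first. A two-line GLSL check (the function names are mine):

vec3 texcoordBiasFirst (vec4 p)   { vec4 b = vec4 (p.xyz * 0.5 + p.w * 0.5, p.w); return b.xyz / b.w; }
vec3 texcoordDivideFirst (vec4 p) { return (p.xyz / p.w) * 0.5 + 0.5; }
// both yield identical texture coordinates (up to floating-point error)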

Ack, that’s because I forgot the brackets around (w * 2.0). They are present in my code though.

vec3 lightWinPos = lightClipPos.xyz / (lightClipPos.w * 2.0) + 0.5;

Another question: Does multiplication with the bias matrix just do the scaling and translation, or also the w divide?

Just the scale and translation. Effectively, it takes you from 4D clip space (in-frustum is -w <= x,y,z <= w) to 4D window space (in-frustum is 0 <= x’,y’,z’ <= w’).

If you look at the bias matrix you can convince yourself that it does exactly that. What it does to X, for example, is: x' = x/2 + w/2. Right? And how do we transform the interval from -w <= x <= w (CLIP SPACE) to 0 <= x' <= w' (WINDOW SPACE)? First, we divide by 2, yielding -w/2 <= x/2 <= w/2. Then we add w/2, yielding 0 <= x/2 + w/2 <= w.
So x' = x/2 + w/2 and w' = w.
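Written out as the actual matrix (in OpenGL's column-major layout - the same constant as the Pascal 'offset' array earlier in the thread):

const mat4 bias = mat4 (0.5, 0.0, 0.0, 0.0,   // column 0
                        0.0, 0.5, 0.0, 0.0,   // column 1
                        0.0, 0.0, 0.5, 0.0,   // column 2
                        0.5, 0.5, 0.5, 1.0);  // column 3: the +w/2 translation
// bias * vec4 (x, y, z, w) == vec4 (x/2 + w/2, y/2 + w/2, z/2 + w/2, w)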

Thanks. Knowing all that I have been able to make the shader work using shadow2DProj.

Here's the fragment shader code:


uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2DShadow shadowMap;
uniform mat4 modelviewProjInverse;
uniform vec3 lightPos;
uniform float lightRange;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define A 5001.0 //(ZNEAR + ZFAR)
#define B 4999.0 //(ZNEAR - ZFAR)
#define C 10000.0 //(2.0 * ZNEAR * ZFAR)
#define D (cameraNDC.z * B)
#define ZEYE -10000.0 / (5001.0 + cameraNDC.z * 4999.0) //-(C / (A + D))

void main() 
{
float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
vec3 cameraNDC = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
vec4 cameraClipPos;
cameraClipPos.w = -ZEYE;
cameraClipPos.xyz = cameraNDC * cameraClipPos.w;
vec4 lightWinPos = gl_TextureMatrix [2] * cameraClipPos;
float w = abs (lightWinPos.w);
float lit = ((w < 0.00001) || (abs (lightWinPos.x) > w) || (abs (lightWinPos.y) > w)) ? 1.0 : shadow2DProj (shadowMap, lightWinPos).r;
float light;
if (lit == 1.0)
   light = 1.0;
else {
   vec4 worldPos = modelviewProjInverse * cameraClipPos;
   float lightDist = length (lightPos - worldPos.xyz);
   light = sqrt (min (lightDist, lightRange) / lightRange);
   }
gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

I hope everybody is satisfied with my choice of variable names. :D

Excellent! Congrats!

Thanks to everybody who has helped me with understanding this. :)