I have implemented deferred shadowing in my engine to complement the deferred lighting.
In this way not only is the lighting decoupled from the geometry, but the shadow generation techniques (VSM, PCF, SAVSM, CVSM, etc.) are decoupled from the lighting shaders.
After the various shadow maps have been created for each scene light (using VSM, PCF, etc.), a 2D post-process is used to create a shadow mask: a 4-channel RGBA8 texture which gathers up to 4 scene lights' shadow contributions and is then accessed during the lighting phase. It is only during this post-process that the shadow comparisons take place; the results of the comparisons are written to a colour texture (aka the shadow mask) as 'shadow occlusion values'. Unlike shadow maps, this texture can be blurred safely.
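To make that concrete, here is a minimal sketch of what the shadow-mask post-process fragment shader could look like. All the names here (ecPositionTex, shadowMap0..3, shadowMatrix) are mine for illustration, not the engine's, and I assume the full-screen quad provides UVs in gl_TexCoord[0]:
//--------shadow mask creation: 2D post-process GLSL fragment shader (sketch)--------
uniform sampler2D ecPositionTex;      //G-buffer: eye-space vertex positions (assumed name)
uniform sampler2DShadow shadowMap0;   //one depth map per scene light, up to 4
uniform sampler2DShadow shadowMap1;
uniform sampler2DShadow shadowMap2;
uniform sampler2DShadow shadowMap3;
uniform mat4 shadowMatrix[4];         //per light: scale_bias * lightProj * lightView * sceneCamInv

void main()
{
    vec3 ecEyeVertex = texture2D (ecPositionTex, gl_TexCoord[0].st).xyz;
    vec4 p = vec4 (ecEyeVertex, 1.0); //w must be 1.0

    //one occlusion value per light, packed into one RGBA8 channel each
    gl_FragColor = vec4 (shadow2DProj (shadowMap0, shadowMatrix[0] * p).r,
                         shadow2DProj (shadowMap1, shadowMatrix[1] * p).r,
                         shadow2DProj (shadowMap2, shadowMatrix[2] * p).r,
                         shadow2DProj (shadowMap3, shadowMatrix[3] * p).r);
}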
During the lighting phase, the shadow mask texture (an RGBA8 colour) is bound and accessed in the various lighting shaders. The beauty is that I only ever need to access the RGBA8 shadow mask texture, and therefore only need one variant of the lighting shader no matter which technique was used to generate the shadows in the first place.
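For illustration, the lighting shaders then only ever need something like the snippet below - a sketch with assumed names (shadowMaskTex, channelMask, screenSize, diffuseAndSpecular are all mine):
//--------lighting phase: shadow mask lookup GLSL snippet (sketch)--------
uniform sampler2D shadowMaskTex; //the RGBA8 shadow mask written by the post-process
uniform vec4 channelMask;        //e.g. (1,0,0,0) selects this light's channel
uniform vec2 screenSize;         //viewport size in pixels

//inside main(), after the usual lighting maths:
float shadowTerm = dot (texture2D (shadowMaskTex, gl_FragCoord.xy / screenSize), channelMask);
diffuseAndSpecular *= shadowTerm; //identical lookup no matter how the shadows were generated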
Now… I have been reading this thread with great interest. I may actually have been slow off the mark, but I did not realise you were creating a deferred shadow system (although you did say something about a post-process, which I did not cotton on to). Does your system match what I am doing (which came from Crysis and other games)?
The reason why I ask all of this is that in the deferred system I store eye-space vertex positions of the geometry in the G-buffer (rather than having to reconstruct them from scene depth). When rendering the scene from the light's POV, gl_Vertex gets transformed into the light's eye space. Therefore, to calculate the shadow map texture coordinates you need: scale bias * light projection matrix * light view matrix * inverse scene camera matrix * the eye-space position stored in the G-Buffer.
I use the following calculation to generate the matrix that gets passed to the shadow-compare shader (the one creating the post-process shadow mask):
Procedure setShadowMatrix (var projection, view: TMatrix);
const
  offset: GLMatrixf = (0.5,0,0,0, 0,0.5,0,0, 0,0,0.5,0, 0.5,0.5,0.5,1); //column-major scale-bias
begin
  glMatrixMode (GL_TEXTURE);                                   //build the chain on the texture matrix stack
  glLoadMatrixf (@offset[0]);                                  //convert clip space [-1..1] to texture space [0..1]
  glMultMatrixf (@projection.glmatrixf[0]);                    //light's projection
  glMultMatrixf (@view.glmatrixf[0]);                          //light's camera
  glMultMatrixf (@CameraMatrix_inv.glmatrixf[0]);              //scene camera inverse
  glGetFloatv (GL_TEXTURE_MATRIX, @shadowmatrix.glmatrixf[0]); //read back the combined matrix
  glMatrixMode (GL_MODELVIEW);                                 //restore the default matrix mode
end;
Hence what I just said above: scale bias * light projection matrix * light view matrix * inverse scene camera matrix.
The idea is that the resulting matrix can be applied directly to eye-space positions, which is exactly what happens in the next piece below:
//--------shadowing apply: texture compare GLSL shader snippet----------------------------------------------
//gl_TextureMatrix[0] = scale_bias * light projection matrix * light camera view * scene camera view_inverse
vec4 shadowCoord      = gl_TextureMatrix[0] * vec4 (ecEyeVertex.xyz, 1.0); //w must be 1.0 or projected shadows are not correct
vec4 shadowCoordPostW = shadowCoord / shadowCoord.w; //only needed when the sampler is not a shadow2D variant
The idea in the shadow compare is to compare the z of the original scene (the eye-space position as stored in the G-buffer) against the light's z value (in the shadow map texture). The trick is to ensure the computed shadow texture coordinates correspond to the original scene's vertex at any one pixel. Since my G-Buffer stores eye-space vertex positions, I needed to undo the original scene camera transform (hence the multiply by the inverse camera matrix) to recover the world-space position of the original scene vertex - which is the same as the object-space gl_Vertex.xyz when the model matrix is identity.
This is accomplished with: gl_TextureMatrix[0] * vec4 (ecEyeVertex.xyz, 1.0);
So now I have: shadow texture coords = scale_bias * light projection matrix * light camera view * scene camera view_inverse * ecEyeVertex.
In other words, I have obtained the position of the original scene vertex as projected by the LIGHT's camera (light clip space) and converted it to texture coordinates.
This is ready to be compared against the light's depth texture using the shadow2DProj command.
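To make the two sampling options concrete, here is a sketch of both (shadowMap and depthTex are assumed names; a real shader would obviously use only one of the two):
//--------depth compare options: GLSL snippet (sketch)--------
//Option A: sampler2DShadow + shadow2DProj - hardware compare, the divide by w happens for you
uniform sampler2DShadow shadowMap;
float lit = shadow2DProj (shadowMap, shadowCoord).r; //1.0 = lit, 0.0 = in shadow

//Option B: plain depth texture + manual compare - this is where shadowCoordPostW is needed
uniform sampler2D depthTex;
float litManual = float (shadowCoordPostW.z <= texture2D (depthTex, shadowCoordPostW.st).r); //add a small bias in practice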
So to be explicit: the texture coordinates now contain the standard scene vertex, transformed by the light's camera,
and the shadow map contains the scene depth, also transformed by the light's camera.
Both of these are in texture-space [0…1] range, due to the scale bias * light projection matrix transforms, and because both are in the same space the comparison is valid.
OK, so why the long post?
Well, I think you may have tried to shortcut the process by going directly into clip space (just my opinion). You have also tried to compute the eye-space position of the vertex from NDC. The problem is that each step along the way needs to be verified and checked, and since you generally can't debug GLSL, it's impossible to check - hence some of the problems.
I have tried to explain what I do and, in doing so, help you with yours, even if I am using eye-space for everything plus the convenience of the deferred G-Buffer. When I first started all of this I was convinced that OpenGL's fixed functionality was nuts for doing everything in eye space, and that I would be better off using whatever space I wanted. But, more and more, eye-space has proven very convenient for all sorts of reasons. Perhaps I am suggesting you do things in eye-space throughout - that WILL simplify all your calculations and comparisons.
I would like to see you get this working with the least amount of effort and time (even if that means eye-space for now). Later on you can show us all how to do this in NDC or clip space and why that's better (even if it's just a convenience for you).