Compute eye space coord from window space coord?

I got that code from one of the pages you had linked to.

Oh right. Sorry, I’m apparently still half-asleep.

What I wanted to say is that your comments about the variable names I have chosen were completely inappropriate and uncalled for. I understood the names well, and from the very simple code they were used in it was quite clear what they were.

What is inappropriate about pointing out that they’re wrong? It doesn’t matter if you understand them today; it’s still wrong.

If you have this code:


vec4 normal = gl_Position;

It is certainly syntactically correct. But it is both misleading and confusing for anyone trying to read it. You may happen to understand it, but it’s still wrong.

You aren’t seriously proposing that I start manually doing matrix computations to find out which coefficients to use, are you?

Well, that’s the brute force way. The more elegant way is to actually look at the matrix and see from inspection which values are used to compute Zclip and which ones are not. A matrix multiplication is just a linear system of equations for computing the output values.

Or, if you just want someone to give you the answer, assuming you’re using the standard glFrustum/gluPerspective projection matrices, it’s the last two columns of the third row of the matrix (assuming standard mathematical matrix conventions). In OpenGL terms, it’s “Zclip = projmat[2][2] * Zeye + projmat[2][3]”.
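If you want to go the other way (recover Zeye from a depth value you read back), here's a minimal sketch. Note that GLSL mat4 indexing is column-major (mat[column][row]), so those two coefficients sit at projmat[2][2] and projmat[3][2] there:

float zEyeFromNdc (mat4 projmat, float zNdc)
{
    // Zclip = projmat[2][2] * Zeye + projmat[3][2], and Wclip = -Zeye,
    // so Zndc = Zclip / -Zeye. Solving for Zeye:
    return -projmat[3][2] / (zNdc + projmat[2][2]);
}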

Sorry. Just ran out of time and had to quit for the day.

I guess the artifacts are a result of the texture being created with GL_CLAMP.

Probably a good bet. GL_CLAMP samples from the “border color” when you get close to the edge, which is nonsensical for a depth map (AFAIK). Maybe you can set a border color of 100% white and have the shadow comparisons always fail past the edges with this (as that might be interpreted as the light space far-clip value). Don’t know.

But if you’re in doubt as to whether this is causing you problems, I’d be tempted to try GL_CLAMP_TO_EDGE – this ignores the border color and merely clamps to the edges of the texture. Then if that’s it, and you need something more elegant, you can add it.

karx11erx, no need to get angry: when you ask people to help and read your code, and to do it for free, it does mean you will receive feedback, good or bad.
Sorry I can’t actually help you, but AFAIK everything posted by Alfonse makes sense. Something “almost working” may well be a dead end, and a better “not yet working” solution can actually be nearer the correct solution. Maybe there is something before the fragment shader that messes with .w?

Ok, thanks, Alfonse and Dark_Photon.

I am still having a problem though: The depth values from the shadow map seem to be a tad too large. The effect is that when I move the player ship close to a wall its shadow is projected on, the shadow disappears.

Window dimensions and projection settings are identical for light view and camera view.

Any ideas what could be wrong here?

ZBuffer,

I had already noticed the mistake I had made.

Sure, Alfonse is a very knowledgeable person and has given me the most valuable (and in fact the only correct) information about the subject covered here, and I appreciate his help. It would have been more of a pleasure, though, if he had not made comments about variable names that I find rather pointless and a bit smart-alecky when the topic is a completely different one. If you look at the variable I had named “colorDepth”, you will see that it made sense even though Alfonse didn’t understand why I had called it that (it’s the depth value associated with the scene’s color buffer). This is also neither “production” nor complex code. Tone plays a role here too: saying “btw, it might be a good idea to name that variable fragDepth, since …” would have made a big difference.

Great skill and knowledge are no excuse for a lack of good manners. So, yes, this is a public place, and everybody can comment and give feedback, including me.

Just offhand, that sounds odd. Shadow map window (viewport) dimensions should match the shadow map res, while camera view window (viewport) dimensions should match the window you’re rendering into, and they’re often not exactly the same.

And that the projection settings match sounds odd too, unless the FOV you are using for your light just happens to be exactly the same as the FOV for your camera, and both happen to be symmetric perspective. Also, the camera and light will be in different positions pointing in different directions, so that alone will give you different projections for the camera frustum vs. the light frustum.

I’ll rescan your notes above to see if I see anything on a second go-round.

AFAIK the projection doesn’t depend on the view direction; the modelview takes care of that. The projection just clips the transformed vertices and prepares the NDC calculation.

I have different dimensions and FOV for camera and shadow maps (shadow maps have a bigger FOV and a square frustum), but to make sure everything is as identical as possible, I gave the shadow maps the same window dimensions and FOV as the camera view.

I cannot tell why shadow map depth values are too large though.

You can tell a lot from “how” it disappears. Does it pop off? Does it look like the object is gradually being “sliced away” to nothing? If the former, then your shadow caster is probably being culled from your shadow map draw pass. If the latter, then you’ve probably computed your light-space near and/or far clip planes wrong when rendering the shadow map, or have your transforms wrong when applying them.

You can more easily see what you’re doing wrong if you implement a debug mode where you render the shadow map onto the window, with black = near value and white = far value. For instance:

http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/

Look at the picture but ignore the shader math – I think when I last traced through it, it wasn’t right. You can use what you already know to compute eye-space Z from the shadow map and render that.
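For instance, here’s a minimal sketch of such a debug pass, assuming the 1.0/5000.0 near/far planes used in this thread, a fullscreen quad, and the shadow map bound as an ordinary sampler2D with depth comparison disabled:

uniform sampler2D shadowMap;

#define ZNEAR 1.0
#define ZFAR 5000.0

void main()
{
    // 0..1 window-space depth from the shadow map
    float z = texture2D (shadowMap, gl_TexCoord [0].xy).r;
    // linearize: distance in front of the light, in ZNEAR..ZFAR
    float zEye = (2.0 * ZNEAR * ZFAR) / (ZNEAR + ZFAR - (z * 2.0 - 1.0) * (ZFAR - ZNEAR));
    // black = near plane, white = far plane
    float gray = (zEye - ZNEAR) / (ZFAR - ZNEAR);
    gl_FragColor = vec4 (vec3 (gray), 1.0);
}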

Oops. Yeah, you’re right of course. Was thinking about the whole frustum, not just the projection. My bad.

In addition to FOV and symmetric perspective needing to be the same, you’d also have to have the same near and far clip in your light and eye frustums to have the same projection, and I wouldn’t think that’s a given. The behavior you’re describing sounds like your light-space near plane might be “slicing away” the shadow caster to where it fails to land in the shadow map. Rendering that debug view of the shadow map will show you much more clearly what’s going on.

The latter happens. It looks like the shadow gradually moves through the solid geometry and disappears behind it (so to speak) as the shadow caster approaches it.

Z near and Z far never change in my application. I’ve got them fixed at 1.0 and 5000.0.

Z near and Z far are mapped to 0.0 and 1.0 respectively during depth calculation, so the depth buffer contents are automatically somewhere between 0.0 and 1.0. I have been rendering the shadow map already, but you cannot tell from that whether it’s a bit off. (Or I have completely failed to understand you.)

I wouldn’t know how to screw up the transforms. After rendering, I simply read the OpenGL matrices. To invert them, I am using the inversion function code from MESA. Of course, if something’s wrong there, it would explain the problems. Is there a way to have OpenGL invert the matrices and then read them? Hm … Google might be my friend here … no, it isn’t.

If there are numerical differences between floating point handling on a GPU and the Intel/AMD FPUs I’d be in trouble.

Does OpenGL natively store the matrices as double or float?

By the way, it sounds like you were a little sketchy above on the space transforms involved. If you’re still a bit sketchy, at this URL:

http://www.paulsprojects.net/tutorials/smt/smt.html

there is a good diagram of where you’re starting, where you’re going, and how you get there.

If you’re still having problems, post the latest version of your shader code; it’s not clear whether you’ve modified it since you last posted.

And does your shadow caster ever get within 1 unit of distance of the light source? (If so, …ouch.)

uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;
uniform mat4 projectionInverse;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE (-C / (A + D))

void main()
{
    float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec3 ndcPos = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
    vec4 clipPos;
    clipPos.w = -ZEYE;
    clipPos.xyz = ndcPos * clipPos.w;
    vec4 lightClipPos = gl_TextureMatrix [2] * eyePos;
    float shadowDepth = texture2DProj (shadowMap, lightClipPos).r;
    float light = 0.25 + ((fragDepth < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

gl_TextureMatrix [2] contains light projection * light model view * inverse camera model view * inverse camera projection. That’s how I directly get from camera to light clip coordinates.
I am rendering scene and shadow map as fullscreen quads, hence the usage of gl_TexCoord [0].
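Expressed in code, the chain is something like this (a hedged sketch only: inverse() exists in GLSL 1.40+, in the actual application the matrix is built CPU-side and loaded into gl_TextureMatrix [2], and these uniform names are made up):

uniform mat4 lightProjection, lightModelView, cameraModelView, cameraProjection;

mat4 cameraClipToLightClip ()
{
    // applied right-to-left: camera clip -> camera eye -> world/light eye -> light clip
    return lightProjection * lightModelView * inverse (cameraModelView) * inverse (cameraProjection);
}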

Ok, that shader shouldn’t even compile; there’s an “and magic happens” step in the middle of it. We computed clipPos. Then the very next line uses eyePos, which is declared nowhere. However, from your gl_TextureMatrix[2] description, “camera clip coordinates” (clipPos) is actually what you want here.

Also, just to clarify terminology, this transform should be:

gl_TextureMatrix [2] = NDC-to-window-space-matrix * light projection * light viewing * inverse camera viewing * inverse camera projection

There are no object coordinates involved here, and thus no modeling transforms. Peeling off the transforms in reverse order, here are the spaces we start at and bounce through with each successive transform:

camera CLIP-SPACE coordinates ->
camera EYE-SPACE coordinates ->
WORLD-SPACE coordinates ->
light EYE-SPACE coordinates ->
light CLIP-SPACE coordinates ->
light WINDOW-SPACE coordinates

And after being explicit about that, I think I see one problem. You didn’t say you included the (-1…1) -> (0…1) “NDC-to-window-space” matrix in your gl_TextureMatrix[2], so with the texture2DProj (which does the .w divide to take your clip coords to NDC coords), you’d be looking up into the shadow map with -1…1 NDC texcoords. That’s not right. You need to look up into the shadow map with 0…1 window-space texcoords. Since you said you’re seeing reasonable shadows except when you move too close, I have to assume that you included this matrix, but didn’t mention it (?)

Note that this matrix scales not only X and Y but Z as well, to position your depths in the 0…1 range for proper shadow map comparisons. If you’re missing this, I could see where you might see some strange depth comparison results possibly explaining what you’re seeing.

See “scale and bias matrix” here:

http://en.wikipedia.org/wiki/Shadow_mapping

for what I’m talking about with the “NDC-to-window-space” matrix.
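Concretely, in GLSL terms that scale-and-bias matrix is just this (column-major constructor order):

// maps -1..1 NDC to 0..1 in x, y, and z
const mat4 ndcToWindow = mat4 (0.5, 0.0, 0.0, 0.0,
                               0.0, 0.5, 0.0, 0.0,
                               0.0, 0.0, 0.5, 0.0,
                               0.5, 0.5, 0.5, 1.0);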

Also, another bug, and the bigger one. This dovetails with my last comment, and it comes from not being very deliberate about what space you’re in.

In the “fragDepth < shadowDepth” line, you’re comparing depth values in two different spaces!!! That’s a big problem. fragDepth is a “camera WINDOW-SPACE” depth value (0…1). And shadowDepth is a “light WINDOW-SPACE” depth value (0…1). These aren’t the same space, so this comparison is nonsensical.

What you need to do is add that “scale and bias” matrix (i.e. the “NDC-to-window-space” matrix) to your transform chain. After multiplying by gl_TextureMatrix[2], this’ll give you a “light WINDOW-SPACE” x, y, and z (depth) value. And instead of doing “fragDepth < shadowDepth” as your test, you do “lightWinPos.z < shadowDepth”. That is:


vec4 lightWinPos = gl_TextureMatrix [2] * clipPos;
float shadowDepth = texture2DProj (shadowMap, lightWinPos.xyw).r;
float light = 0.25 + ((lightWinPos.z < shadowDepth) ? 0.75 : 0.0);

It is for exactly this reason (easy to get confused) that when I’m passing positions and normals around, I always use the convention pos_<space> or normal_<space> for the variable identifiers so I can keep it straight what space they’re in. For instance:

pos_win     (implicitly camera frame relative)
pos_clip    (ditto)
pos_eye     (ditto)
pos_lt_eye  (now light frame relative)
pos_lt_clip (ditto)
pos_lt_win  (ditto)

Much harder to trip up this way, and even if you do, much easier to spot errors when tracing the code.

It was 2:30 am for me when I posted this, and I’ve got a particularly bad case of influenza.

uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;
uniform mat4 projectionInverse;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE (-C / (A + D))

void main()
{
    float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec3 ndcPos = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
    vec4 cameraClipPos;
    cameraClipPos.w = -ZEYE;
    cameraClipPos.xyz = ndcPos * cameraClipPos.w;
    vec4 lightClipPos = gl_TextureMatrix [2] * cameraClipPos;
    float shadowDepth = texture2DProj (shadowMap, lightClipPos).r;
    float light = 0.25 + ((fragDepth < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

Here is the magic:

gl_TextureMatrix [2] contains bias * light projection * light model view * inverse camera model view * inverse camera projection. That’s how I directly get from camera to light clip coordinates.

I forgot to mention the bias matrix in my erroneous post, but it is involved.

vec4 lightWinPos = gl_TextureMatrix [2] * clipPos;

As Alfonse has pointed out, that variable name is misleading: after the multiplication with gl_TextureMatrix [2] you have the light’s clip coordinates, not window coordinates.

I was wrong about lightWinPos. Since the bias matrix has been applied, gl_TextureMatrix [2] * cameraClipPos does indeed yield the light window space coordinate.

Ouch! Sorry to hear that. :(

Here is the magic: gl_TextureMatrix [2] contains bias * light projection * light model view * inverse camera model view * inverse camera projection. That’s how I directly get from camera to light clip coordinates.

I forgot to mention the bias matrix in my erroneous post, but it is involved.

Ok. As you mostly said in your second post above, the bias matrix actually takes 4D light CLIP-SPACE to a 4D light WINDOW-SPACE. Then when you do the perspective divide, you’re in a 3D light WINDOW-SPACE.

Which also highlights another important point: instead of your shadow comparison being “(lightWinPos.z < shadowDepth)”, it probably needs to be:

(lightWinPos.z/lightWinPos.w < shadowDepth)

so you’re comparing 3D light WINDOW-SPACE depth to 3D light WINDOW-SPACE depth.

Or, to do the same thing in a slightly different form which gets rid of the potential divide-by-zero and ugly denormalized numbers creeping into your math:

(lightWinPos.z < lightWinPos.w * shadowDepth)

Intuitively that makes sense.

Pardon me for not being crystal clear here and iteratively working with you toward the solution; I haven’t actually implemented point light source shadows, just directional light source shadows (where there is no perspective involved in the light projection).

I just mentioned that to explain why I am making such stupid mistakes. It’s hard to think when you’re tired and your head hurts. :)

Thank you very much for your help so far.

I am not comparing lightWinPos.z with shadowDepth, though, but rather the depth value from the corresponding scene buffer fragment. The big question for me is why that doesn’t seem to work right.

Edit:

Your comments have put me on the right track. This shader does the trick (bias is not in gl_TextureMatrix [2]):


uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define A 5001.0 //(ZNEAR + ZFAR)
#define B 4999.0 //(ZNEAR - ZFAR)
#define C 10000.0 //(2.0 * ZNEAR * ZFAR)
#define D (cameraNDC.z * B)

#define ZEYE -10000.0 / (5001.0 + cameraNDC.z * 4999.0) //-(C / (A + D))
void main()
{
    float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec3 cameraNDC = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
    vec4 cameraClipPos;
    cameraClipPos.w = -ZEYE;
    cameraClipPos.xyz = cameraNDC * cameraClipPos.w;
    vec4 lightClipPos = gl_TextureMatrix [2] * cameraClipPos;
    vec3 lightNDC = (lightClipPos.xyz / lightClipPos.w) * 0.5 + 0.5;
    float shadowDepth = texture2D (shadowMap, lightNDC).r;
    float light = 0.25 + ((lightNDC < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

My suspicion is that texture2DProj (which I had used before) wasn’t working right (probably because I hadn’t set up texture generation beforehand?).

Good deal! That’s lookin pretty good! And it addresses most of my concerns. Ignoring optimizations, just a couple things you might consider tweaking for readability and correctness:

  1. “texture2D( shadowMap, lightNDC )” I’d use lightNDC.xy to be explicit. Frankly, I’m not sure how it compiles as-is, since texture2D only accepts a vec2…

  2. “lightNDC”. That variable actually contains light WINDOW-SPACE coords, since you roll the 0.5/0.5 bias into it. So I might name it lightWin.

  3. “(lightNDC < shadowDepth)”. I’d use “(lightNDC.z < shadowDepth)” to be explicit. And frankly I’m a bit surprised that it even compiles without that.

  4. “(lightClipPos.xyz / lightClipPos.w)”. This does take you to light NDC-SPACE. But what about the case where the fragment position is at z=0 in light EYE-SPACE? That is, it’s in the plane of the light source? There, dividing by lightClipPos.w will give you a divide by zero, introducing a nasty denormalized number in your shader and/or causing all hell to break loose with your math. Probably a good idea to protect against that. See my previous post for one method of doing this.

  5. You might consider turning on depth comparisons for your texture, using a sampler2DShadow instead of sampler2D, and doing the lookup with a shadow2D (or shadow2DProj) to let the hardware do the depth comparison for you! With this you can get PCF filtering of your shadow lookups for free on some hardware merely by setting LINEAR filtering on the depth texture! (See the sketch just after this list.)
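A rough sketch of item 5 (not your code; it assumes the application sets GL_TEXTURE_COMPARE_MODE to GL_COMPARE_R_TO_TEXTURE and GL_TEXTURE_COMPARE_FUNC to GL_LEQUAL on the depth texture, and the lightWinPos varying is just a stand-in for however you get that coordinate):

uniform sampler2DShadow shadowMap;  // shadow sampler instead of sampler2D
varying vec4 lightWinPos;           // stand-in: light window-space coord (x, y, depth, w)

void main()
{
    // shadow2DProj divides .xyz by .w, compares the resulting depth against
    // the texel, and returns 0.0 (shadowed) .. 1.0 (lit); with LINEAR
    // filtering on the depth texture, some hardware PCF-filters this for free.
    float lit = shadow2DProj (shadowMap, lightWinPos).r;
    gl_FragColor = vec4 (vec3 (0.25 + 0.75 * lit), 1.0);
}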

My suspicion is that texture2DProj (which I had used before) wasn’t working right (probably because I hadn’t set up texture generation beforehand?).

I don’t think so. texture2DProj has nothing to do with texgen (texture coordinate generation), and in fact when you plug in your own shaders you’ve effectively disabled the built-in texgen.

Functionally, texture2DProj is simple: texture2DProj = texture2D, but with texcoord.xyz divided by texcoord.w internally (if you pass it a vec4 texcoord) before the texture lookup (and comparison, if enabled) is performed.

I “think” the only reason why texture2DProj exists is because there at least used to be (maybe still is) dedicated hardware in the GPU to do the divide for you if you wanted it, so it “might” be a little faster to use texture2DProj than to use texture2D and do your own divide in the shader.
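In other words, ignoring the comparison case, these two lookups should produce the same result (tex is a hypothetical sampler, just for illustration):

uniform sampler2D tex;

vec4 lookupBothWays (vec4 tc)
{
    vec4 a = texture2DProj (tex, tc);        // internal divide by tc.w
    vec4 b = texture2D (tex, tc.xy / tc.w);  // same divide done manually
    return a - b;                            // should be (nearly) zero
}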

I was using lightNDC.z in the comparison. The GLSL compiler automatically casts non-vec2 parameters to texture2D to vec2 and issues a warning about it. I only see the warnings when I actually have errors in a shader and examine the compiler output.

I think that I will still get in hot water when light eye-space z == 0.0, because I have to divide lightClipPos.xy by w to access the shadow depth.

I think lightWinPos should have xy scaled by the actual viewport dimensions. So my ndcPos is somewhere between true NDC and window coordinates, isn’t it?

shadow2DProj compares against the depth from the frame buffer, doesn’t it? And that doesn’t work for me.

When I get to where I want to be with my shadow mapping, I will blur the shadow maps with a Gaussian blur shader, which I expect to look better than PCF-generated soft edges.
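For reference, a minimal sketch of what one pass of that blur could look like (separable 5-tap Gaussian; texelStep is a made-up uniform holding the offset of one texel, and the shader is run once with a horizontal step and once with a vertical one):

uniform sampler2D source;
uniform vec2 texelStep;   // e.g. (1.0 / width, 0.0) for the horizontal pass

void main()
{
    // binomial 5-tap kernel (1 4 6 4 1) / 16, weights summing to 1.0
    float sum = 0.0625 * texture2D (source, gl_TexCoord [0].xy - 2.0 * texelStep).r
              + 0.25   * texture2D (source, gl_TexCoord [0].xy -       texelStep).r
              + 0.375  * texture2D (source, gl_TexCoord [0].xy).r
              + 0.25   * texture2D (source, gl_TexCoord [0].xy +       texelStep).r
              + 0.0625 * texture2D (source, gl_TexCoord [0].xy + 2.0 * texelStep).r;
    gl_FragColor = vec4 (vec3 (sum), 1.0);
}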