Shadow Map Projection problem in fragment/vertex program

Hi!

I’m having problems with shadow mapping inside vertex/fragment programs. I’ve generated the depth values in a pbuffer, and I can project from the camera, but I can’t get it to project from the light.

I pass the result of this matrix multiplication into the fragment program:

texMat = biasScaleMatrix * lightProj * lightModelView * cameraModelViewInverse;

where biasScaleMatrix is:
[ .5 0 0 0 ]
[ 0 .5 0 0 ]
[ 0 0 .5 0 ]
[ .5 .5 .5 1 ]

and cameraModelViewInverse is the inverse of a 4x4 matrix generated by
gluLookAt( from.x, from.y, from.z, at.x, at.y, at.z, up.x, up.y, up.z);
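
For illustration, this kind of texture matrix is often built with the GL matrix stack itself (a sketch only; the variable names are placeholders, and cameraModelViewInverse is assumed to be a float[16] computed elsewhere, since GL has no matrix-invert call):

glMatrixMode(GL_TEXTURE);
glPushMatrix();
glLoadIdentity();
glTranslatef(0.5f, 0.5f, 0.5f);          // bias
glScalef(0.5f, 0.5f, 0.5f);              // scale
glMultMatrixf(lightProjection);          // light's projection
glMultMatrixf(lightModelView);           // light's gluLookAt
glMultMatrixf(cameraModelViewInverse);   // inverse of the camera's gluLookAt
glGetFloatv(GL_TEXTURE_MATRIX, texMat);  // read the product back (column-major)
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

Note that glGetFloatv returns the matrix column-major, so if a fragment program DP4s against it row by row, the array has to be transposed when loading it into program.local.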

I’m using the TXP instruction to do the texture projection.

I’ve seen some messages where people have included the dimensions of the depth texture, but I haven’t seen anything on how to properly include the data.

Any help would be greatly appreciated!

Mike

What exactly do you mean? What you’ve said isn’t enough to go on (at least for me).

I think that I am doing something wrong with the matrices. I looked at a number of sources to determine the correct order and matrices to use to project the depth map from the light’s position. However, all of the sources used EYE tex coord generation.

I want to avoid using tex coord generation.

What I need to know is: is the ‘texMat’ that I compiled sufficient to transform to light space?

Are there additional operations that I’m missing?

Thanks,
Mike

What are you multiplying with that matrix?

I’m multiplying it by the ‘fragment.position’ variable. (It is the eye-space vertex position, right?)

Sometimes a bad fovy in the light’s projection can give bad results, or nothing at all, so make sure the light’s perspective frustum actually fits what you’re doing.
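
For example (illustrative values only; the point is that the light’s frustum has to actually contain the shadow casters):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0,    // fovy of the light
               1.0,     // aspect ratio of the shadow-map pbuffer
               1.0,     // zNear: push out as far as you can
               100.0);  // zFar: pull in as close as you can

A tight zNear/zFar range also helps the depth precision of the map.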

Originally posted by Perdedork:
I’m multiplying it by the ‘fragment.position’ variable. (It is the eye-space vertex position, right?)
Actually, no. Those are the window coordinates of the fragment center. From the ARB_fragment_program spec:

If a fragment attribute binding matches “fragment.position”, the “x” and “y” components of the fragment attribute variable are filled with the (x,y) window coordinates of the fragment center, relative to the lower left corner of the window. The “z” component is filled with the fragment’s z window coordinate. This z window coordinate undergoes an implied conversion to floating point. This conversion must leave the values 0 and 1 invariant. The “w” component is filled with the reciprocal of the fragment’s clip w coordinate.
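
In other words, for a fragment whose clip-space position is (x_c, y_c, z_c, w_c), fragment.position holds

( x_window, y_window, z_c / w_c mapped into 0..1 by the depth range, 1 / w_c )

so the z component has already been through the perspective division.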

Hmm… I’ve tried doing the transformation in both the vertex and fragment program. I’ve checked the matrix math with a graphing calculator to make sure there aren’t problems with the ordering, operations, or inverse.

Right now I get one of two different results:

A ‘shadow’ that changes with the position of the camera AND light, and seems to creep from the bottom or top. It doesn’t seem to relate to the geometry of the scene…

Z-Buffer fragmentation that doesn’t seem to get better with biasing or scaling, along with the same issue as above.

The things I suspect could be wrong are one of the following:

I need to scale/bias the result of the texMatrix by the dimensions of the depth texture.

There are additional operations done by the openGL texture coordinate generation that I haven’t included.

NOTE: I have been able to project the texture into the scene from the camera’s viewpoint by moving the screen width into fragment.position.w before the TXP command. BUT when I try to apply the texMat transformation, it doesn’t work…

If anyone could help me with this ( or include a description of the proper steps for shadow mapping in vertex/fragment programs) I’d really appreciate it!

Thanks,
Mike


I need to scale/bias the result of the texMatrix by the dimensions of the depth texture.

Unless your shadow map uses one of the *_texture_rectangle extensions (if that combination is even supported), the matrix is independent of the texture dimensions.

NOTE: I have been able to project the texture into the scene from the camera’s viewpoint by moving the screen width into fragment.position.w before the TXP command. BUT when I try to apply the texMat transformation, it doesn’t work…

Forget fragment.position; it is not what you want. Your projection from the camera’s viewpoint appears to work because fragment.position.xy contains pixel coordinates: (0, 0) in the bottom-left corner and (width, height) in the top-right corner, so division by the screen width gives (1, height/width) in the top-right corner.

If you need the eye-space position in the fragment program, you have to send it manually through some interpolator.

If I remember correctly, the current NVIDIA SDK contains an OpenGL shadow-mapping example you can look at.

Can you post source of your shaders?

Sure… The shader I use to place the depth into the color buffer is this:

!!ARBfp1.0

TEMP dist;

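# put a scaled depth value into the color buffer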
MUL dist, fragment.position.z, fragment.position.w;
MUL result.color,dist, 100;

END

The vertex program for drawing the scene is this one (I’ve taken out all of the lighting and bump/spec/parallax-mapping operations to make it easier):

!!ARBvp1.0

OUTPUT smTexCoord = result.texcoord[5];
PARAM mvp[4]= {state.matrix.mvp};
PARAM mv[4] = {state.matrix.modelview};

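# object-space vertex position -> clip space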
DP4 result.position.x,vertex.position,mvp[0];
DP4 result.position.y,vertex.position,mvp[1];
DP4 result.position.z,vertex.position,mvp[2];
DP4 result.position.w,vertex.position,mvp[3];

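# pass the eye-space position to the fragment program via texcoord[5]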
DP4 smTexCoord.x,vertex.position,mv[0];
DP4 smTexCoord.y,vertex.position,mv[1];
DP4 smTexCoord.z,vertex.position,mv[2];
DP4 smTexCoord.w,vertex.position,mv[3];

END

The fragment program for rendering with the shadow map is this (the ‘smMatR*’ params are the columns of the texture matrix that I compute outside of the fragment program):

!!ARBfp1.0

PARAM smMatR1 = program.local[3];
PARAM smMatR2 = program.local[4];
PARAM smMatR3 = program.local[5];
PARAM smMatR4 = program.local[6];

ATTRIB smTexCoord = fragment.texcoord[5];
TEMP shadowDist, dist, tempResult;

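# transform the eye-space position by texMat (smMatR1..4 applied as matrix rows)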
DP4 tempResult.x, smMatR1,smTexCoord;
DP4 tempResult.y, smMatR2,smTexCoord;
DP4 tempResult.z, smMatR3,smTexCoord;
DP4 tempResult.w, smMatR4,smTexCoord;

#texture[4] is the PBuffer depth tex
TXP shadowDist, tempResult, texture[4], 2D;

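# compute this fragment's depth and compare it with the value stored in the shadow map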
MUL dist,fragment.position.z, fragment.position.w;
MAD dist, dist, 100, .1;
SGE shadowDist, dist, shadowDist;

…etc…

The texture matrix is:
texMat = scaleBias * lightProj * lightMV * modelViewInv;

Where the scaleBias is
[ .5 0 0 0 ]
[ 0 .5 0 0 ]
[ 0 0 .5 0 ]
[ .5 .5 .5 1 ]

If it would help, I will post how I compute all of the matrices, and/or example values for various positions.

Thanks,
Mike

If your matrix is correct, the projection should work; however, it is much more complicated than it needs to be. Your initial vertex positions are in object space, so multiplying by the MV matrix only to multiply the result by the inverse of MV in the next step is unnecessary, and you can skip that part. Also, unless you have another use for the eye-space position in the fragment program, you can do the entire matrix multiplication in the vertex program and send only what is currently in tempResult to the fragment program.
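
In other words, instead of

texMat = scaleBias * lightProj * lightMV * modelViewInv (applied to the eye-space position)

you can upload

texMat = scaleBias * lightProj * lightMV (applied directly to the object-space vertex.position in the vertex program)

and send the result through the interpolator.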

I assume that you are using floating-point textures to store the shadow map. While I am not fully sure, I believe that fragment.position.z is the value after the perspective division, so the multiplication by fragment.position.w is not necessary to get the depth, and you end up applying the perspective division twice (because the “w” component of fragment.position is filled with the reciprocal of the fragment’s clip w coordinate).
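
That is, if fragment.position.z already holds z_clip / w_clip (mapped into 0..1), then multiplying it by fragment.position.w = 1 / w_clip yields z_clip / w_clip^2, which is not a depth value at all.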

To me it seems that your problem is the calculation of the dist variable, which is then compared with shadowDist. The problem is that fragment.position.z contains the distance from the camera, while what you need for the comparison is the distance from the light; otherwise the comparison is nonsense.

EDIT: It is likely that your projection works perfectly and only the incorrect comparison makes it look wrong.
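
(The usual way to get the light-space depth for the comparison is from the projected coordinate you already compute: after the projective division, tempResult.z / tempResult.w is the fragment’s depth in the light’s 0..1 range, and that is the value to test against the depth stored in the shadow map, using the same scale/bias that was applied when the map was written.)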

After you mentioned the distance to the camera vs. the distance to the light, I smacked myself in the head… I can’t believe I made that mistake.

After I fixed it to check the light distance, I got an even stranger problem:

I know the pbuffer depth texture has the same values for Red, Green, and Blue; however, as the light rotates, I either get nothing or a beautiful spread of primary color overlaid on the scene.

As the light rotates, at first I will get nothing, then it appears like colored lights are being projected against the scene.

I guessed that maybe the SGE doesn’t have all of the components set, so the result comes out as something like [0,1,1] or [1,0,1], etc. But that wouldn’t explain why such large portions are covered.

I’ll try to post a screenshot later.

A screenshot would be helpful. Can you also post the parts of the shader relevant to the color calculation in your case?
Probably the best course of action would be to output partial results of the calculation and see if they look as they should.

Actually, the color problem was me trying to code by ‘cut and paste’. I put the wrong variable into the distance function, so in certain cases it would project color over portions of the scene.

Now the problem is related to how the depth texture is being projected to the scene:

In one image I made, you can see a light spot on the leg. The light spot is actually the character’s depth from the light’s view. It seems that the resulting texture coordinates from the world -> light space -> texture projection need to be magnified.

I don’t have web space to link the pictures from… Do you mind if I e-mail you?

Thanks,
Mike

I was thinking about your scale/bias matrix. It seems to me that its translation part should be in the last column, not the last row. This may explain the scaling you see, because this way the value of the w coordinate is increased before the division.
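
To illustrate with the way your fragment program applies the rows to the coordinate (x, y, z, w): with your matrix, the DP4 against the last row gives

w' = .5x + .5y + .5z + w

instead of w' = w, so the division inside TXP is wrong. With the translation in the last column, i.e.

[ .5 0 0 .5 ]
[ 0 .5 0 .5 ]
[ 0 0 .5 .5 ]
[ 0 0 0 1 ]

the first row gives s' = .5x + .5w, and after the division s'/w' = .5 * (x/w) + .5, which is the intended 0..1 remapping.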

I don’t have web space to link the pictures from… Do you mind if I e-mail you?

I have made my email visible in my profile.

@Komat: How can a 1 at (4, 4) matrix location bias the w-coordinate?

@Perdedork: A good way to debug shadow-mapping problems is to view the light’s view using some sort of color coding. That will at least tell you that your shadow map is being generated properly. Alternatively, you could write the values of the shadow map to the fragment color. That will project a color onto your screen and give you a reasonably good idea of whether the depth texture is correct. It will also prove that the vertex transformation from eye space to the light’s clip space (and eventually texture space) is working properly.

Once the depth values (in the color) start being projected properly onto the screen, you’ll know that the transformations are correct and that the shadow map is being generated properly as well. The distance comparison from there onwards is trivial.

Optimization note: most notably, dist and tempResult are calculated per-fragment. You can use interpolators and calculate these values per-vertex.

@Komat: How can a 1 at (4, 4) matrix location bias the w-coordinate?

It cannot. The nonzero values in the other columns of the last row can do that.