Hi. I’m implementing shadow mapping using GLSL and am supplying the transformation to light-clip space via the texture matrix. I fill the texture matrix like so:

Matrix4x4<float> transform = viewport * proj * camera * model * mv_inverse;
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(transform.GetArray());

And I calculate the texture coordinates in the vertex shader with:
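(The shader code appears to have been lost from the post; judging from the full matrix chain quoted later in the thread, it was presumably a one-liner along these lines, assuming texture unit 0:)

```glsl
// Vertex shader: the texture matrix already holds
// viewport * proj * camera * model * mv_inverse, so multiplying by
// gl_ModelViewMatrix completes the chain from object space.
void main() {
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_ModelViewMatrix * gl_Vertex;
    gl_Position    = ftransform();
}
```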

Shadows work fine with this setup. However, it seems to me that I should be able to remove the mv_inverse from the texture matrix and the gl_ModelViewMatrix from the texture coordinate calculation since they should resolve to identity. But when I do, the shadows are cast wildly. I’ve confirmed that mv_inverse is the inverse of the modelview matrix at the time of drawing. Can anyone explain why I need to do this extra matrix multiplication?

Can you explain how you ended up with this chain of five 4x4 matrix multiplications to transform object-space coordinates into the clip space defined when rendering from the light’s point of view?

[ul]
[li]viewport scales and biases into [0, 1] so the result fits inside the texture and depth ranges.[/li]
[li]proj is a perspective projection matrix whose [-1, 1] range falls just outside the scene’s extent.[/li]
[li]camera is calculated much like gluLookAt, with the light position as the camera location and the origin as the focal point.[/li]
[li]model is the transformation that moves the vertex coordinates into world coordinates (the model without the view). My regular transformation is something like gluLookAt(), glTranslatef(), glTranslatef(), glRotatef(); model contains those last three transformations.[/li]
[/ul]
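The scale-and-bias step can be checked numerically. A minimal standalone sketch (not the poster’s actual Matrix4x4 code) applying a column-major bias matrix to the clip-space corners, showing the [-1, 1] to [0, 1] mapping:

```cpp
#include <array>
#include <cassert>

// The "viewport" (scale-and-bias) matrix: maps clip-space [-1, 1] into
// [0, 1] on all three axes so the result can index the shadow map and be
// compared against stored depth. Column-major, as OpenGL expects.
static const float bias[16] = {
    0.5f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.5f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.5f, 0.0f,
    0.5f, 0.5f, 0.5f, 1.0f,
};

// Apply a column-major 4x4 matrix to a 4-component vector.
std::array<float, 4> mul(const float m[16], const std::array<float, 4>& v) {
    std::array<float, 4> r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[col * 4 + row] * v[col];
    return r;
}
```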
From my understanding, I’m just rendering the scene with the light source as the camera. Shouldn’t I essentially just need:

tex_coords = viewport * proj * camera * model * gl_Vertex;

The first four matrices are placed in the texture matrix. But this doesn’t seem to be working. I have to throw in the camera’s modelview matrix and its inverse, even though they should be the identity matrix:

tex_coords = viewport * proj * camera * model * mv_inverse * gl_ModelViewMatrix * gl_Vertex;

If mv_inverse is exactly the inverse of gl_ModelViewMatrix, their product is the identity matrix; it cannot be anything else.
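To illustrate that claim with a standalone sketch (not the poster’s code): take a hypothetical modelview that is a pure translation, pair it with its exact inverse, and the product comes out as the identity.

```cpp
#include <cassert>
#include <cmath>

// out = a * b for column-major 4x4 matrices (OpenGL's layout).
void mat_mul(const float a[16], const float b[16], float out[16]) {
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = s;
        }
}

// A hypothetical modelview: translation by (3, -2, 5)...
const float modelview[16]  = {1,0,0,0, 0,1,0,0, 0,0,1,0,  3,-2, 5,1};
// ...and its exact inverse: translation by (-3, 2, -5).
const float mv_inverse[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, -3, 2,-5,1};

// True if m is the 4x4 identity, to within float tolerance.
bool is_identity(const float m[16]) {
    for (int i = 0; i < 16; ++i) {
        const float expected = (i % 5 == 0) ? 1.0f : 0.0f;
        if (std::fabs(m[i] - expected) > 1e-6f) return false;
    }
    return true;
}
```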

Are you sure of the contents of all your matrices? Log them all, fetching them from OpenGL with glGetFloatv; one of them is probably wrong.
I see that you separate model and view into ‘model’ and ‘camera’ matrices (respectively) in your computation, but OpenGL doesn’t make this distinction. It isn’t necessary, and I am fairly sure the ‘camera’ matrix is wrong. That matrix should put the eye at (0, 0, 0) with the viewing axis along the negative z-axis.

Unfortunately, they are correct. The inverse modelview that I place in the texture matrix is the actual inverse of the modelview matrix as queried at the time I draw the model. I’ve verified my matrix calculation using octave.

The eye is automatically at (0, 0, 0) and looking down the negative z-axis without any transformation; the notion that OpenGL has a special view matrix is a misconception. gluLookAt just lets us move immediately into a more convenient coordinate system. My camera matrix similarly builds a coordinate system centered on the light position in world coordinates, looking at the world’s origin.

Furthermore, if camera were wrong, the shadows probably wouldn’t be appearing correctly when I include mv_inverse * gl_ModelViewMatrix. This inclusion is the only difference between working shadows and non-working shadows.

Many of the tutorials I find on the topic of shadow mapping seem to throw in these extra multiplications. I cannot figure out why:

My basic point is that we move the vertex into a coordinate system that we immediately back out of with an inverse transformation. That coordinate system is the view portion of the modelview matrix. This transformation seems pretty unnecessary conceptually, but it seems that all tutorials I find do this transformation and I myself get incorrect shadowing if I don’t do this transformation.

Ah, I don’t think I realized what was in the mv_inverse matrix until I read the second link you gave.

I think I understand now why they do that.
As I read it, in this tutorial mv_inverse contains the inverse of the camera transformation, and only that.

Then, in your shader, gl_ModelViewMatrix contains the camera transformation plus possibly some world transformations that place objects in the scene.

When you render from the light’s view, it is necessary to keep this world transformation. So you have two options:

When rendering from the light’s viewpoint, set up all the world transformations, and do the same from the camera’s viewpoint. This is what I usually do, and why I did not understand your problem at first.

When rendering from the light’s viewpoint, don’t set up the world transformations, but multiply the texture matrix by the inverse of the camera transformation (say, cameraViewInverse).
Then, when rendering from the camera’s viewpoint, multiply the texture matrix retrieved in the shader by the camera’s modelview matrix. To sum up:
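(The summary itself appears to be missing; from the two steps just described, and the lightModelView name referenced in the reply below, it presumably read something like:)

```
texture matrix            = bias * lightProjection * lightModelView * cameraViewInverse
tex_coords (in the shader) = gl_TextureMatrix[0] * gl_ModelViewMatrix * gl_Vertex
```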

I disagree. The assignment page talks of the modelview matrix, not the view portion of the modelview matrix. Furthermore, the inverse matrix is used to cancel out the entire gl_ModelViewMatrix, since the light has its own modelview matrix that includes everything to bring vertices into world coordinates.

This is what I do too. I just have your lightModelView expanded into camera * model, and most of my matrix multiplication is done on the CPU and loaded into the texture matrix. But my shadows are failing if I don’t also have a seemingly unnecessary multiplication by mv_inverse * gl_ModelViewMatrix.

Of course they talk of the modelview matrix, since in OpenGL model and view don’t exist separately.
For the moment I maintain what I said: the article is not specific enough about what is set up in the modelview matrix when rendering from the light’s point of view:

“Set up the light’s ModelView and Projection matrices just as you would a camera (gluPerspective and gluLookAt)”

They don’t speak about scene transformations, just camera setup.

If modeling transformations were included in the last matrix, then when multiplying it with the modelview matrix (when rendering from the camera’s point of view), the model matrix from the light pass would be multiplied with the view matrix from the camera… which is nonsense, in my opinion.

Whatever is included in the modelview matrix, my point is that the inversion is unnecessary. It seems to be unnecessarily carried over from earlier glTexGen implementations of shadow mapping.

And, in happier news, I have discovered the source of my problem. I draw most of my geometry under a certain modelview transformation, but I failed to remember that I draw a plane in untransformed world coordinates. This plane was getting transformed by my model matrix, though it should have been left alone. When I account for this as I set up the texture matrix, my shadows appear correctly on all geometry.
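(As a sketch of the fix, reusing the names from the first post; view_inverse is a hypothetical name for the inverse of the modelview bound when the plane is drawn, which for the plane is just the view:)

```
// Geometry drawn under the full modelview keeps the original chain:
Matrix4x4<float> tex_model = viewport * proj * camera * model * mv_inverse;

// The plane is issued directly in world coordinates, so its texture
// matrix must not apply 'model' again:
Matrix4x4<float> tex_plane = viewport * proj * camera * view_inverse;

glMatrixMode(GL_TEXTURE);
glLoadMatrixf(tex_model.GetArray());   // bound before drawing the model
// ... draw model ...
glLoadMatrixf(tex_plane.GetArray());   // bound before drawing the plane
// ... draw plane ...
```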

You mean that your geometry’s vertex coordinates are already specified in world space, not object space, when given to OpenGL, and thus you were transforming the geometry twice?

Half right. I had a plane whose coordinates (as issued by glVertex) were already in world space. That is, the plane was rendered after the view transformation but before any model rotations and translations. My texture matrix used to find projected texture coordinates, however, was the same for both plane and model. For the plane, this involved an extra and incorrect transform from model coordinates to world coordinates.