Shadow Map with non static models ?

In all the shadow map demos I’ve seen, the code grabs the ModelView and projection matrices just before rendering a static scene from the light’s POV. These are used to construct an eye-space-to-light-space transformation, which is loaded into the texture matrix; tex coords are then just generated from the vertex coords, etc.

BUT, what if you have a moving object in the scene, i.e. the model part of the ModelView is changing? Then there is no single place where you can grab the ModelView matrix.

Am I correct? If so, the following options spring to mind:

When rendering the scene from the light POV, grab and invert the ModelView just after specifying the view (before any model transformations have been applied). Then, in a vertex shader when rendering from the eye POV, apply the eye ModelView matrix, the inverse view and the regular light view??

Or, maintain separate Model and View matrices: grab the light view matrix as before, but also grab the eye view matrix while it only contains the view. Then load identity into the ModelView so that it will only contain the model transforms that are applied afterwards. Then in the vertex prog do the model and view transformations separately to generate both the eye space transform and the light space transform???

Rob

Rob, I don’t really see a problem. As you say, the TexGen required for shadow mapping is only dependent on your view, not on any model transformations. But shouldn’t you already have easy access to the “View” matrix without the “Model” part?

Have a look at the code for my shadow mapping demo on www.delphi3d.net if you want. I store the camera position and orientation as a set of world-space XYZ vectors that I can feed straight into gluLookAt(). I also use gluLookAt() when setting up my texture matrix for the shadow maps, and adding extra model transformations in the DrawFloor(), DrawTeapot() or DrawDonut() functions does not break the demo.

In “real-world” apps, you would have a camera class that can directly give you the view matrix without having to call gluLookAt() or other GL transformation routines, so you can just glMultMatrix() it without extra overhead.

Let me know if I explained it badly

– Tom

Hi Tom,

I’m doing all the texgen in a vertex prog, so I’m trying to replicate what glTexGen does.

Have a look at page 41 of http://developer.nvidia.com/docs/IO/1446/ATT/ShadowMaps_CEDEC_E.pdf

‘glTexgen automatically applies this [Inverse eye view matrix] when the modelview matrix contains just the eye view transform.’

What glTexGen does is grab the inverse eye view and apply it after the eye ModelView transform. This leaves just the model part of the transform in place, which is then transformed by the light view and projection. It’s effectively backing out the eye view step.

What I’m thinking is that because I have complete freedom in my vertex shader, I can get rid of this ‘backing out’ step by maintaining separate Model and Eye View matrices. I can then just do

vertex->model mat->light view->light projection

to get the texcoord and

vertex->model mat->eye view->eye projection

to get my regular projected coord.

Rob.

[This message has been edited by pocketmoon (edited 09-26-2002).]

Hi Pocketmoon,

The rationale for using eye-linear texgen is that you need to transform vertices into eye space for other operations anyway, and it is a “reasonable” unified space (it doesn’t vary per object). If you took the approach of having separate model, eye_view and light_view matrices, you would have to transform positions 3 times instead of 2.

Of course, there’s the possibility that you don’t need the eye space position in your vertex program, which would leave room for a different formulation, but it’s unlikely that you’ll be able to get it below 2 transforms unless you concatenate them on the CPU, which can be annoying.

Thanks -
Cass