Shadows for static meshes and static lights

For this kind of shadowing, I usually implement it with shadow mapping. However, I’m convinced it could be optimized a lot, but so far I haven’t found out how to do that optimization.

In fact, I would have liked to store the texture coordinates once and for all, but the way I compute them (and I think the way it has to be done) relies on GL’s automatic texture coordinate generation.

Is there a way to read those texture coordinates back once they have been generated? Is there any other way to avoid redoing the same calculations every frame?

Read about OpenGL’s feedback mode - it can be very useful for lots of things.
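For example, roughly like this (a minimal, untested sketch; draw_scene_with_texgen() stands in for your own rendering code, and the buffer size is arbitrary):

static GLfloat buffer[1024 * 1024];            /* feedback output buffer */

glFeedbackBuffer(1024 * 1024, GL_4D_COLOR_TEXTURE, buffer);
glRenderMode(GL_FEEDBACK);                     /* nothing is rasterized in this mode */
draw_scene_with_texgen();                      /* your usual pass with texgen enabled */
GLint count = glRenderMode(GL_RENDER);         /* number of floats actually written */

/* buffer now holds tokens (GL_POLYGON_TOKEN, ...) followed by per-vertex data:
   x y z w in window coordinates, 4 color values, then 4 texture coordinates
   (texture unit 0 only) - those texture coordinates should be the generated ones. */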

What you’re asking for doesn’t make much sense - the mesh’s texture coordinates are not necessary for shadow mapping.

Perhaps you mean, in the vertex shader:

projected_coords = lightmatrix * gl_Vertex;

and in the fragment shader:

shadow2DProj( shadow_texture, projected_coords );

I suppose that if the lightmatrix doesn’t change you could calculate it yourself and pass it to the program as a uniform mat4, but I don’t think it’s going to buy you much performance (if any), since mat * vec operations are fast.
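On the C side that would look something like this (just a sketch; build_light_matrix() is a made-up helper for whatever bias * light projection * light modelview product you use):

GLfloat lightmat[16];
build_light_matrix(lightmat);                    /* computed once, since the light is static */

glUseProgram(program);
GLint loc = glGetUniformLocation(program, "lightmatrix");
glUniformMatrix4fv(loc, 1, GL_FALSE, lightmat);  /* upload once, reuse every frame */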

I don’t see why it doesn’t make much sense, so I’ll try to explain it better.

I actually don’t do it in shaders at all. What I do is grab the light matrices (projection and modelview), render my scene, and enable texture coordinate generation for s, t, r and q. After that I do some operations in texture matrix mode and render my scene again.
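Roughly the usual eye-linear setup, for reference (a sketch only, not necessarily my exact code; lightProjection and lightModelview are the matrices grabbed from the light):

static const GLfloat planes[4][4] = {
    { 1, 0, 0, 0 }, { 0, 1, 0, 0 }, { 0, 0, 1, 0 }, { 0, 0, 0, 1 }
};
static const GLenum coord[4] = { GL_S, GL_T, GL_R, GL_Q };
static const GLenum gen[4]   = { GL_TEXTURE_GEN_S, GL_TEXTURE_GEN_T,
                                 GL_TEXTURE_GEN_R, GL_TEXTURE_GEN_Q };

for (int i = 0; i < 4; ++i) {
    glTexGeni(coord[i], GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGenfv(coord[i], GL_EYE_PLANE, planes[i]);   /* identity planes, camera modelview current */
    glEnable(gen[i]);
}

/* GL multiplies the eye planes by the inverse of the modelview that is current
   when glTexGen is called, so the texture matrix only needs the bias and the
   light's matrices: */
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.5f, 0.5f, 0.5f);      /* bias: [-1,1] clip space -> [0,1] texture space */
glScalef(0.5f, 0.5f, 0.5f);
glMultMatrixf(lightProjection);
glMultMatrixf(lightModelview);
glMatrixMode(GL_MODELVIEW);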

What I would have liked to do is calculate and read back those texture coordinates just once, so that I could also do the texture matrix operations just once and, finally, supply the texture coordinates for my scene on a dedicated texture unit that uses the depth texture image.

From my point of view it would definitely be much faster: no GL texture coordinate generation, no matrix manipulation…

OpenGL doesn’t do magic. You can do the calculations on the CPU and store the texture coordinates statically. If you don’t know the necessary maths and can’t find a textbook for it, you might take a look at the parts of the OpenGL spec that you currently use. Otherwise, ask specific questions about the parts you’re uncertain about (you probably only need to construct a perspective projection matrix and a look-at matrix).

You might also consider calculating a single set of texture coordinates and combining all the shadow maps into a single lightmap. This should speed up rendering.
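Doing it once on the CPU would look roughly like this (untested sketch; mat4_mul() and mat4_transform() stand in for whatever matrix helpers you already have, and the static mesh is assumed to be in world space):

GLfloat shadowMat[16];
mat4_mul(shadowMat, biasMatrix, lightProjection);    /* bias * light projection */
mat4_mul(shadowMat, shadowMat, lightModelview);      /* ... * light modelview */

for (int i = 0; i < vertexCount; ++i)
    mat4_transform(texcoords[i], shadowMat, vertices[i]);   /* homogeneous (s,t,r,q) */

/* at draw time, feed the stored coordinates instead of enabling texgen: */
glTexCoordPointer(4, GL_FLOAT, 0, texcoords);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);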

Lightmaps are very fast - instead of using a separate shadow map for each light source and computing lighting, you can use one lightmap for all light sources and skip lighting altogether.
But if you insist on using shadow maps, then storing precomputed shadow map coordinates will probably not speed up your application. It’s an additional array of data in memory that needs to be accessed. It may turn out that the texture coordinate generation provided by (dedicated) hardware works just as fast as accessing such an array of coordinates. On the latest hardware, with its more powerful vertex processors, that is very likely to be the case.

So if you want to gain some speed, go for the lightmaps. You will need to precompute them, but rendering will become very easy and very fast.
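For reference, applying a precomputed lightmap is just a second texture unit modulating the base texture (a sketch; lightmapTexture and lightmapUV are assumed names):

glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, lightmapTexture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);   /* base color * lightmap */

glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(2, GL_FLOAT, 0, lightmapUV);                 /* the precomputed lightmap UVs */
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glActiveTexture(GL_TEXTURE0);                                  /* back to the base map */
glClientActiveTexture(GL_TEXTURE0);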

If you still insist on shadow maps, then try to create a list of polygons (for each light source) that are visible from that light source.
You may also consider using the scissor test, the depth bounds test, early z-out or anything else you can come up with. You may also try to use the alpha test to discard pixels that are in shadow (if a pixel is black, there is no need to add it to the framebuffer, right?).
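For example, the alpha-test trick can be set up by routing the depth-compare result into alpha (a sketch, assuming an ARB_shadow-style depth texture called shadowMapTexture and made-up scissor bounds):

glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);  /* compare result in RGBA */

glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GEQUAL, 0.5f);    /* fragments that fail the shadow compare never reach blending */

/* and the scissor test can clip the lighting pass to the light's screen-space bounds: */
glEnable(GL_SCISSOR_TEST);
glScissor(lightRectX, lightRectY, lightRectW, lightRectH);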

I don’t insist on using shadow maps - I asked in order to shed some light on the question; it was just something that came to my mind. As you pointed out, I could do all the calculations for the texcoords myself, but since GL does this perfectly… I guess I’ll be able to do the maths myself, and if I’m not wrong this involves some normal/bi-tangent space work.

Indeed, I’ve never gotten into lightmaps; I recently wanted to, but I had other things to do.

Thanks for your answers.

jide, what I wrote above, even though it’s GLSL, applies to the fixed pipeline as well.
The problem with lightmaps is that they don’t cast shadows onto anything dynamic, e.g. if a building is casting a lightmapped shadow onto the ground, a player who walks into that area won’t receive the shadow. Thus you have to employ lightmapping + something else.

“I guess I’ll be able to do the maths myself”

And why not use feedback mode for it?
If you already have shadow maps implemented, then just use your code and see what OpenGL generates.

k_szczech: Yes, that’s true, you already talked about it. I’m going to have a look at feedback mode, which I’ve never used.

zed: I haven’t implemented shadow mapping anywhere except in simple test programs. But I believed that ‘a player who walks into the shadow will receive that shadow’. I guess it depends on how you cast the shadow, am I wrong? At least in my tests, the object casting the shadow also receives the shadow. But that might be because I do it in 3 passes: one from the light’s point of view, one rendering the scene normally, and one rendering the whole scene with the shadow (so I presume the whole scene, players included, receives the shadow, because the whole scene gets texture coordinates generated and is bound to the shadow map).
But maybe I’m wrong somewhere.

“zed: I haven’t implemented shadow mapping anywhere except in simple test programs. But I believed that ‘a player who walks into the shadow will receive that shadow’.”
That’s true - I think you misunderstood me. I was talking about lightmapping; I was pointing out that if you do lightmaps for static lights + objects, it’s not going to work when something else, e.g. a person, comes into the shadowed area.

Yes, I misread what you wrote in your previous post.