I was implementing per-pixel lighting with shadows using Omni-Lights (and hence cube shadow maps), and it was working fine. All calculations were done in world space and hence no problems.
But a change in our material system architecture forced me to pass light-related parameters via glLightfv(…) and then access them in the shader via the gl_LightSource[n] built-in uniform variable. Since OpenGL automatically transforms the light position into view space, I decided to do all of my calculations in view space. That doesn’t matter as far as per-pixel lighting is concerned, but the shadow map lookup (which uses the reversed lightToVertex vector) got screwed up, since cube map lookups require the lookup vector to be in world space (at least that’s what I have heard). Because of that, my shadowing calculations broke. I don’t want to use the inverse transpose of the modelview matrix and then do every calculation in world space.
Is there any method that will let me access the proper cube map texel given a lookup vector in view space? Also, is there any way to force OpenGL not to apply the world-to-eye transform to the light position?
Thanks in advance.
Have you thought about passing the light source parameters as uniforms instead of relying on the built-in variables?
For simplicity’s sake, you can always pass the light position to your shader as a vec4 uniform, with the w component of the vector holding some info about the lighting range or something like that…
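As a minimal sketch of that idea (the uniform name `u_LightPosRange` and the linear range-based falloff are just illustrative choices, not anything the poster specified):

```glsl
// Hypothetical packing: xyz = light position in world space,
// w = lighting range.
uniform vec4 u_LightPosRange;

varying vec3 v_WorldPos; // world-space position passed from the vertex shader

void main()
{
    vec3  toLight = u_LightPosRange.xyz - v_WorldPos;
    float dist    = length(toLight);

    // Simple linear falloff driven by the range stored in w.
    float atten = clamp(1.0 - dist / u_LightPosRange.w, 0.0, 1.0);

    gl_FragColor = vec4(vec3(atten), 1.0);
}
```

Since the uniform never goes through glLightfv, OpenGL applies no transform to it, so the value arrives in whatever space you put it in.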
In a lot of my demos, what I do is supply the vertex shader with enough uniforms to describe my geometry and lighting parameters in world space; that way, no matter what the model’s transform is, the shadow mapping (especially the self-shadowing) comes out right every time.
Let me know if you would like to see some binaries and simple source code.
Like I said, the world-space calculations work great! But a change in the material system has forced me to pass the parameters via built-in variables (ever heard of abstraction?).
Even so, I could pass them through uniforms, but with the current architecture that would be a whole lot of inconvenience and ugliness, not to mention a little inefficient as well.
Thanks for the reply though!
I see what you mean; however, I’m not suggesting that you bypass the whole glMaterial thing. I’m only saying that the light source position or direction could be transferred through a uniform, while everything else goes along the fixed-function path.
Edit: or maybe you can use one of the light’s parameters other than the position to transmit your light coordinates? I, for one, almost never touch the attenuation factors, so those would do it for me.
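A sketch of that trick (the variable names are illustrative; the point is that the three attenuation factors are plain floats, so OpenGL applies no transform to them on the way to the shader):

```glsl
// Application side (for reference): smuggle the world-space light
// position through the attenuation factors, e.g.
//   glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION,  lightPos.x);
//   glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION,    lightPos.y);
//   glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, lightPos.z);

varying vec3 v_WorldPos; // world-space position from the vertex shader

void main()
{
    // Recover the untransformed, world-space light position.
    vec3 lightPosWorld = vec3(gl_LightSource[0].constantAttenuation,
                              gl_LightSource[0].linearAttenuation,
                              gl_LightSource[0].quadraticAttenuation);

    // The cube map lookup vector can now be built in world space again.
    vec3 vertexToLight = lightPosWorld - v_WorldPos;

    gl_FragColor = vec4(normalize(vertexToLight) * 0.5 + 0.5, 1.0);
}
```

The cost is that real distance attenuation then has to be computed by hand in the shader, since the built-in factors are occupied.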
That is an option, not a solution. It can be done, but what I really want to know is: is there no way to do a proper cube map texture lookup using a lookup vector in view space?
Another reason this can be important is that most work in OpenGL is done in view space!
Thanks for your help though.
Sorry to butt in, but have cube shadow maps become possible to do without me seeing news of it?
I guess not… but bypassing the whole thing using float textures (or packing depth values into RGBA) works quite nicely. I’ve just written a little demo using floating-point cube map textures and doing shadow mapping with them (naturally it won’t run on NV3x hardware).
It’s not an extension; the whole trick consists of storing the squared light-to-object distances in a cube map in the first pass, then, in the second pass, retrieving that distance and comparing it to the currently computed one.
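The second pass could look roughly like this (a hedged sketch; the uniform names and the bias value are my own illustrative choices, and the cube map is assumed to hold squared distances in its red channel):

```glsl
uniform samplerCube u_ShadowCube;    // squared light-to-surface distances
uniform vec3        u_LightPosWorld; // light position in world space

varying vec3 v_WorldPos; // world-space position from the vertex shader

void main()
{
    // Squared distance from the light to this fragment.
    vec3  lightToVertex = v_WorldPos - u_LightPosWorld;
    float distSq        = dot(lightToVertex, lightToVertex);

    // Distance stored in the first pass; a small bias fights acne.
    float storedSq = textureCube(u_ShadowCube, lightToVertex).r;
    float shadow   = (distSq <= storedSq + 0.05) ? 1.0 : 0.0;

    gl_FragColor = vec4(vec3(shadow), 1.0);
}
```

Storing the squared distance avoids a sqrt in both passes, at the price of worse precision far from the light when the texture format is narrow.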