Shader Problem

Dear All,
I have a Cg shader that I’m trying to get to do shadow mapping. I know it’s not the most efficient thing in the world, but I’ll fix that after it works properly, which it is currently failing to do in a big way (http://laeuchli.com/jesse/shadowerror.PNG). So, the vertex shader is:

VertO Out;
float4 tempos;
Out.pos = mul(ModelViewProj,In.pos); 
tempos.xyz=In.pos.xyz;
tempos.w=1;
float4 tex4scale=mul(TextureMatrix,tempos);
tex4scale=tex4scale*.5+.5;
Out.tex4=tex4scale;

Texture matrix is my lightprojection*lightmodelview. I have tested this by changing

Out.pos = mul(ModelViewProj,In.pos); 

to

Out.pos = mul(TextureMatrix,In.pos); 

and I get the scene from the lightPOV.
My pixel shader is:

struct Vert
{
float4 pos : POSITION;
...
float4 tex4: TEXCOORD3; 
};

Pix Out;
Vert In;
float4 shadowmap=(tex2Dproj(ShadowMap,In.tex4));
Out.dif=shadowmap.aaaa;

Could someone tell me what I’m doing wrong, or give me a link to an OpenGL Cg shader which does shadow mapping with the RegTex profile, so I can eyeball that some?
Thanks,
Jesse

I think you’re screwing up your perspective divide by resetting your homogeneous w to 1. That may not be the only issue, just the most obvious. [edit] In fact, now that I look at this: you want to be in eye space, i.e. after the modelview multiply, not after the projection. However, I think your reasoning is that setting w to 1 returns you to eye space. Good luck with that theory. The shadow coordinates need to be texgen’d from the light and transformed to the appropriate space. There are lots of ways to do this, but I don’t see the groundwork in your shader code.

One can fake object-linear texgen by using only the texture matrix, so it should do all the projective stuff. You could add the scale and bias into the matrix as well.
Is the matrix mul correct? Assuming you’ve got column-major matrices and all that, you should multiply each row of the matrix with the column vector -> strq, or xyzw (and w=1 seems a bit suspect).
Anyway, I’ve never used Cg, so I dunno what tex2dproj is supposed to do. The texture-projection has already been done on texcoord level so all you need to do is to fetch the texel with the regular texture-magic.
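To see the scale-and-bias folding in plain numbers, here’s a minimal Python sketch (hand-rolled row-major 4x4s and a made-up light matrix, not real GL code): pre-multiplying a bias matrix onto the light matrix gives the same texcoords as doing the *0.5 + 0.5*w step per vertex.

```python
# Hand-rolled row-major 4x4 helpers (pure Python, no GL).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

# Scale/bias relative to w, so the projective divide still lands in [0,1].
BIAS = [[0.5, 0.0, 0.0, 0.5],
        [0.0, 0.5, 0.0, 0.5],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0]]

# A made-up stand-in for lightProjection * lightModelView.
LIGHT = [[2.0, 0.0,  0.0,  1.0],
         [0.0, 2.0,  0.0, -1.0],
         [0.0, 0.0,  1.0,  0.0],
         [0.0, 0.0, -1.0,  0.0]]

v = [0.3, -0.2, -2.0, 1.0]

# Per-vertex scale and bias after the multiply...
t = mat_vec(LIGHT, v)
per_vertex = [t[0] * 0.5 + t[3] * 0.5,
              t[1] * 0.5 + t[3] * 0.5,
              t[2] * 0.5 + t[3] * 0.5,
              t[3]]

# ...matches one pre-folded matrix applied per vertex.
folded = mat_vec(mat_mul(BIAS, LIGHT), v)
assert all(abs(a - b) < 1e-9 for a, b in zip(per_vertex, folded))
```

The point being: the bias row entries sit in the w column, so the +0.5 scales with w and survives the projective divide.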

Originally posted by coelurus:

Anyway, I’ve never used Cg, so I dunno what tex2dproj is supposed to do. The texture-projection has already been done on texcoord level so all you need to do is to fetch the texel with the regular texture-magic.

This is not true.

The projection is done by tex2Dproj, and there is no texture magic at all. You can use the w coord in whatever way you like. The proj variants of the texture functions, TXP (ARBfp), tex2Dproj (Cg) and texture2DProj (GLSL), are there to enable the use of hardware specialized to perform this division.
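As a rough model of what the proj variants do (a Python sketch with a made-up lookup function standing in for the texture unit, nothing here is real hardware or a real API):

```python
def tex2Dproj(sample2D, coord):
    # Projective fetch: divide s and t by q, then do a plain 2D lookup.
    s, t, r, q = coord
    return sample2D(s / q, t / q)

# A made-up stand-in "texture": an 8x8 checkerboard over [0,1]^2.
def checker(s, t):
    return 1.0 if (int(s * 8) + int(t * 8)) % 2 == 0 else 0.0

# With q = 2, (s, t) = (1.0, 0.25) is actually fetched at (0.5, 0.125).
texel  = tex2Dproj(checker, (1.0, 0.25, 0.0, 2.0))
direct = checker(1.0, 0.25)   # what you'd get with no divide
assert texel != direct        # the per-fragment divide matters
```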

You want to be in eye space, i.e. after the modelview multiply, not after the projection. However, I think your reasoning is that setting w to 1 returns you to eye space.
All the examples I’ve seen multiply by the light’s projection matrix as well as the light’s modelview matrix. Indeed, if you don’t have a projection matrix, the texcoords aren’t even going to be in the range [-1,1], so I don’t understand how I could do without the projection matrix. Could you explain this a bit?

Is the matrix-mul correct

I’m pretty sure the matrix mul is correct because, as mentioned, when I use it instead of the modelview matrix to transform my vertices, I get the scene rendered from the light’s POV.
I understand I can move the scaling off the GPU with a bias matrix; I just wanted to eliminate a possible source of error.

Does anyone know of an OpenGL Reg/tex Cg shadow mapping example? I’ve looked at the DX9 ones, and tried to follow them closely, but it isn’t working.

This is what I’m seeing, given your texture matrix.

Vertex:

VertO Out;
Out.pos = mul(ModelViewProj,In.pos); 
Out.tex4 = mul(TextureMatrix, In.pos);
 
// Scale and bias only if not done in 
// texture matrix (don't forget w here)
Out.tex4.xyz = Out.tex4.xyz * 0.5 + Out.tex4.w * 0.5;

Fragment:

float shadowmap = tex2Dproj(ShadowMap,In.tex4).x;
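To spell out why the bias has to scale with w (plain Python arithmetic with made-up numbers, no GL involved): the scale and bias happen before tex2Dproj’s divide, so a constant +0.5 would get divided by w too.

```python
x, w = 0.4, 2.0   # made-up post-projection coordinate

# What we want after the projective divide: x/w remapped from [-1,1] to [0,1].
target = (x / w) * 0.5 + 0.5

# Bias scaled by w (correct): survives the later divide by w.
with_w = (x * 0.5 + w * 0.5) / w

# Constant bias (wrong): the 0.5 gets divided by w as well.
constant = (x * 0.5 + 0.5) / w

assert abs(with_w - target) < 1e-12
assert abs(constant - target) > 0.1
```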

Hmm, I added that, but it still doesn’t work. I’m not already scaling in my matrix.
Jesse

You need to post the setup code, in addition to anything that affects these matrices, textures, states, … . Otherwise, it’s just a guessing game; there are too many possible sources of errors for that. Everyone has pointed out the visible errors; the unseen ones are all that remain.

It’s very important to understand why this technique works. Once you do, the details are trivial and make perfect sense. All you are doing is rendering two views of the scene simultaneously: the first from the camera’s point of view, the second from the light’s point of view. The only real difference between the two is that one is interpolated in the vertex position, the other in a texture coordinate. The GL automatically deals with perspective division and viewport transformations for vertex positions, but for texture coords you’re on your own. Your earlier code snippet suggests a lack of understanding in this area.

Just think of your Cg programs as rendering the world from the light, as if it were the camera. The only difference is that the result is a texture coordinate, not a vertex position, and as such you need to apply the perspective division and “viewport” transformation yourself. Other than these differences, the light view is identical to the camera view, setup and all.

If you want more demos, I’m sure google should provide plenty. But I can assure you that the method you’re using, when done correctly, works. Honest, I use it myself :slight_smile:

Note that I said the projection was done (perhaps I should’ve said set up) at the texcoord level, which happens with the matrix mul for every vertex. What’s left is the per-fragment divide and comparison filter ([s/q, t/q] cmp r/q, isn’t that magic? :smiley: ). I’m no guru when it comes to how gfx cards really work, but as far as I know (experience only with register combiners), the projective texture divide is done for every fragment.

Anyway, a little tip: project the shadow map with no compare filtering to see more easily how the texture is really projected; that might help pinpoint the problem a little bit.

I understand the texgen implemented as a matrix applied to the vertex coordinates. Looking again, it seems better than I first thought; for some reason I initially thought the transformation was taken post modelview-projection with w set to 1 (which would mismatch eye-space z with typical OpenGL rendering), but you’re not doing that at all. It looks quite reasonable.

There is an issue with taking the object-space numbers straight into a texgen like this: it requires you to apply any model matrix changes to the texture matrix as they happen, doubling your matrix multiplies. It is much better to take the post-modelview transformed verts (not post-projection) so you have eye-space numbers, and apply the inverse viewing matrix via the texture matrix, multiplied with the light position transformation. This way you only need to set up the texture matrix once per light, and it will work as you apply changing model transforms to the modelview matrix.

So, are you adjusting the texture matrix each time you multiply a new transform onto the modelview? If not, you need to; and if you don’t want to, then try taking the vertex input multiplied by the modelview and sending that to the texture multiply, with the inverse viewing matrix on the texture matrix instead of the model matrix.

I suppose that your approach is cheaper per vertex if the application can elegantly manage the matrix work (and you have sufficient vertices per model matrix change to make it worthwhile).
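As a toy numeric check of the eye-space approach (hand-rolled Python 4x4s; the camera here is translation-only so its inverse is trivial, whereas a real view matrix needs a proper rigid-body inverse): folding the inverse viewing matrix into the texture matrix once, then feeding it eye-space verts, matches applying the light matrix to world-space verts directly.

```python
# Hand-rolled row-major 4x4 helpers (pure Python, no GL).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

def translate(x, y, z):
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

camera_view = translate(0.0, 0.0, -5.0)   # world -> eye
inv_view    = translate(0.0, 0.0,  5.0)   # eye -> world (trivial inverse)
light_mat   = translate(1.0, 2.0,  0.0)   # stand-in for lightProj * lightView

# Built once per light: light matrix composed with the inverse view.
tex_matrix = mat_mul(light_mat, inv_view)

v_world = [0.5, -0.5, 1.0, 1.0]
v_eye   = mat_vec(camera_view, v_world)

# Eye-space verts through tex_matrix == world-space verts through the
# light matrix, so no per-model-matrix texture matrix updates are needed.
a = mat_vec(tex_matrix, v_eye)
b = mat_vec(light_mat, v_world)
assert a == b
```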

One last thing: your half bias and offset to match the 2D texcoords looks very suspicious to me. You can build this into the texture matrix for free, and it can be applied only to the s & t axes in light space to map the frustum to texture coordinates, NOT to the whole coordinate. If doing depth compares, it is critical to match the depth ramp of the shadow depth map to the projection of the r coordinate accurately, by building the equivalent projective texture matrix (the one used for the light) on the texture stack.

This link has an old example where I build the texture matrix with the half bias & scale limited to s & t, then multiply in the frustum of the ‘light’, then the ‘light’ orientation. You’d also want to multiply in the model matrix or inverse viewing matrix for anything other than ‘world’ coords (no model transform).

http://www.sgi.com/software/performer/brew/uav.html
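For illustration, a sketch of the kind of bias matrix I mean, with the half scale & bias limited to s & t (made-up Python with row-major matrices, not real GL; how r is handled depends on how the light’s depth ramp was built):

```python
def mat_vec(M, v):
    # Row-major 4x4 times column vector (pure Python, no GL).
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

# Half scale & bias on s and t only; r and q pass straight through,
# so the depth ramp comes entirely from the light's projective matrix.
BIAS_ST = [[0.5, 0.0, 0.0, 0.5],
           [0.0, 0.5, 0.0, 0.5],
           [0.0, 0.0, 1.0, 0.0],
           [0.0, 0.0, 0.0, 1.0]]

coord = mat_vec(BIAS_ST, [-1.0, 1.0, 0.25, 1.0])
# s and t remapped from [-1,1] to [0,1]; r untouched.
assert coord == [0.0, 1.0, 0.25, 1.0]
```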

There is an issue with taking the object-space numbers straight into a texgen like this: it requires you to apply any model matrix changes to the texture matrix as they happen, doubling your matrix multiplies. It is much better to take the post-modelview transformed verts (not post-projection) so you have eye-space numbers, and apply the inverse viewing matrix via the texture matrix, multiplied with the light position transformation. This way you only need to set up the texture matrix once per light, and it will work as you apply changing model transforms to the modelview matrix.

So, are you adjusting the texture matrix each time you multiply a new transform onto the modelview? If not, you need to; and if you don’t want to, then try taking the vertex input multiplied by the modelview and sending that to the texture multiply, with the inverse viewing matrix on the texture matrix instead of the model matrix.

I suppose that your approach is cheaper per vertex if the application can elegantly manage the matrix work (and you have sufficient vertices per model matrix change to make it worthwhile).

Hmm, this could be what I’m doing wrong. The only thing is, the error doesn’t look like a matter of scale or being off a bit; it looks completely wrong. Still, I’ll follow this up, thanks.

One last thing: your half bias and offset to match the 2D texcoords looks very suspicious to me. You can build this into the texture matrix for free, and it can be applied only to the s & t axes in light space to map the frustum to texture coordinates, NOT to the whole coordinate. If doing depth compares, it is critical to match the depth ramp of the shadow depth map to the projection of the r coordinate accurately, by building the equivalent projective texture matrix (the one used for the light) on the texture stack.

Thanks, I understand that I can move this into a matrix, I just wanted to do it here to eliminate a possible source of error. I’m not doing the depth compare yet, so that is not part of the problem, but I’ll certainly keep that in mind if I ever get this annoying problem fixed.
Jesse

Just to clarify, I thought it was understood from the beginning that the vertices were in world-space, not object space. This method, as is, expects vertices in world-space. In an earlier thread, you stated that you wanted a method that worked without the use of texgen functionality. I naturally assumed this was an extension of that discussion.

The eye texgen method is better when dealing with object-space coords, as Dorbie pointed out. But ultimately, the objective is to get world-space coordinates either way. So this method can in fact be better if you know your coords will be in world space, as you save yourself the texgen evaluation.

When you use the eye texgen, all you’re doing is loading the inverse of the camera matrix into the texgen planes (by loading an identity matrix into the eye-linear planes while the camera transform is at the top of the modelview stack). Let CM be the modelview matrix, separated into the camera matrix C and the model matrix M. The vertex transformation is

Veye = C M Vobject

If we multiply both sides by C inverse (our eye texgen matrix), we get

C^-1 Veye = M Vobject = Vworld

So, if your coords are in world-space already, there’s no need for this at all. Otherwise, we need this, or its equivalent, to put us in world-space for the light view texture matrix setup to work.
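A quick numeric sanity check of the algebra (hand-rolled Python matrices; C is translation-only here so its inverse is just the negated translation, where a real camera matrix needs a proper inverse):

```python
# Row-major 4x4 helpers (pure Python, no GL).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

def translate(x, y, z):
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

C     = translate(0.0,  1.0, -6.0)   # camera matrix (translation only)
C_inv = translate(0.0, -1.0,  6.0)   # its inverse
M     = translate(2.0,  0.0,  0.0)   # model matrix

v_object = [1.0, 1.0, 1.0, 1.0]
v_eye    = mat_vec(mat_mul(C, M), v_object)   # Veye = C M Vobject

# Multiplying by C^-1 recovers the world-space position M Vobject.
v_world = mat_vec(C_inv, v_eye)
assert v_world == mat_vec(M, v_object)
```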

Really sorry for posting this here:
can someone give me some texture projection code in GLSL? Thanks in advance, and sorry.

Just to clarify, I thought it was understood from the beginning that the vertices were in world-space, not object space. This method, as is, expects vertices in world-space. In an earlier thread, you stated that you wanted a method that worked without the use of texgen functionality. I naturally assumed this was an extension of that discussion.

That’s right, my vertices are in world space.