I’m getting a bit confused about how to calculate the light vector in the tangent-space basis so that it can be dotted with the normal read from a normal map to get per-pixel diffuse lighting. As far as I know, these are the basic steps to be followed:

1) Set the world light position.

2) Subtract the current vertex’s position from the world light position.

3) Multiply the result by the TBN matrix to get the light vector in the tangent-space basis.

4) Dot that vector (or whatever gets interpolated across vertices) with the normal read from the normal map.

My problem lies with step 2). Suppose the surface I’m trying to light (in this case, a simple quad) has a series of transformations applied to it. Do I need to multiply the vertex’s position by that transformation matrix before I subtract it from the world light position? In other words:

I’ve tried both and it looks OK, but I find it somewhat unnatural, especially when I move the light around. I assume I’m correct in that I’m using the transformation matrix and not the modelview matrix (which would be affected by calls to functions like glFrustum).

As a parting question, I also need to make sure of one thing: diffuse lighting should not depend at all on where the camera is or in what direction it is oriented, right?

Thanks for reading this post; I know it’s quite lame.

You have a stage wrong or are missing a stage, but you have the right solution (I think both of your proposals work):

1.5) Transform world light position to object space light position.

The subsequent subtraction uses this object space light position for the light instead of the light’s world space position.

Note that this is the inverse model matrix and not the inverse modelview. The second option is my preference because the alternative is a full model-matrix multiply per vertex instead of one per object. IT IS CRITICAL that you use only the model-matrix portion of the modelview for the light’s transformation to object space, unless the light position is in eye space.
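As a sketch of that once-per-object transform (my own C, assuming a rigid model transform, i.e. rotation plus translation with no scale, so the inverse rotation is just the transpose; a scaled model matrix would need a full inverse):

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;
typedef struct { float m[3][3]; } mat3;  /* rotation part of the model matrix */

/* For a rigid model transform  world = R * object + t,  the inverse is
   object = R^T * (world - t).  Done once per object for the light, this
   replaces a model-matrix multiply per vertex. */
static vec3 world_to_object(vec3 world, mat3 R, vec3 t) {
    vec3 d = { world.x - t.x, world.y - t.y, world.z - t.z };
    vec3 r = {
        R.m[0][0] * d.x + R.m[1][0] * d.y + R.m[2][0] * d.z,  /* row of R^T */
        R.m[0][1] * d.x + R.m[1][1] * d.y + R.m[2][1] * d.z,
        R.m[0][2] * d.x + R.m[1][2] * d.y + R.m[2][2] * d.z
    };
    return r;
}
```

The per-vertex subtraction in step 2) then uses this object-space light position against the untransformed vertex positions.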

You could, at the start, transform the light to eye space once per frame and then transform it to object space using the inverse modelview, instead of maintaining a separate model matrix. Also remember that with a vertex program you can do the calculations using the transformed eye-space vertices, so you don’t have to worry about any inverse matrix transforms.

Diffuse lighting is not at all view dependent. However, remember that vectors like the view vector need similar treatment if you are doing specular calculations.

i)

transform normal “forward” through tangent base matrix

dot normal with normalized light direction

ii) Classic

transform light to object (multiply by modelview inverse)

subtract vertex from light giving lightDir

sample normal map

transform lightDir “backward” through tangent base matrix

dot normal with normalized light direction

Case II is what most NVIDIA sample code uses, for some reason. However, it requires two matrices (forward and reverse modelview) where case I can use the modelview as-is. Also, in Case II, it’s really hard to do correct reflection mapping, but in Case I, it’s almost trivial.

Thus, I suggest using Case I. If you multiply “backward” in the tangent base doing something like:
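The snippet the poster had in mind is lost here, but for an orthonormal tangent basis the two directions can be sketched like this (my own C; “backward” is multiplication by the transposed TBN matrix, one dot product per axis):

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* "Backward" (Case II): rotate an object/eye-space vector INTO tangent
   space.  For orthonormal T, B, N this is the transpose, i.e. one dot
   product against each basis axis. */
static vec3 tbn_backward(vec3 v, vec3 T, vec3 B, vec3 N) {
    vec3 r = { dot3(v, T), dot3(v, B), dot3(v, N) };
    return r;
}

/* "Forward" (Case I): rotate a tangent-space vector (e.g. the normal-map
   normal) OUT of tangent space, using T, B, N as the matrix columns. */
static vec3 tbn_forward(vec3 v, vec3 T, vec3 B, vec3 N) {
    vec3 r = {
        T.x * v.x + B.x * v.y + N.x * v.z,
        T.y * v.x + B.y * v.y + N.y * v.z,
        T.z * v.x + B.z * v.y + N.z * v.z
    };
    return r;
}
```

Since the basis is orthonormal, the two operations are exact inverses of each other, which is why either case gives the same lighting result.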

Hi everyone, thanks for the replies. Turns out the mistake was something totally different: the code I had written for reading the normal map from a texture was swapping the red and blue values! Everything else was pretty much correct. Guess it serves me right for not bothering to check the texture code.
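For anyone hitting the same thing: a normal-map texel is usually unpacked by mapping each byte from [0, 255] to [-1, 1], and reading the channels in B,G,R order instead of R,G,B (the bug above) silently flips the x and z components of the decoded normal. A sketch of the unpack (my own helper, assuming 8-bit RGB texels):

```c
#include <stdint.h>
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Unpack a normal-map texel: bytes are R,G,B in that order, each mapped
   from [0,255] to [-1,1].  Passing them as b,g,r swaps the x and z
   components of the decoded normal. */
static vec3 unpack_normal(uint8_t r, uint8_t g, uint8_t b) {
    vec3 n = {
        r / 255.0f * 2.0f - 1.0f,
        g / 255.0f * 2.0f - 1.0f,
        b / 255.0f * 2.0f - 1.0f
    };
    return n;
}
```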