Puzzling doubts over texture space for per-pixel lighting...

Hi everyone,

I’ve got quite a few doubts about the use of texture space (i.e. the S, T and N vectors) for per-pixel lighting…

  1. If each vertex of a triangle has its own texture space, does that mean each pixel on the triangle should also have its own texture space (obtained by interpolating between the ones at the vertices)?

  2. Do we interpolate the L or H vectors between the ones at the vertices in eye space, normalize them, then transform them into model space and finally into the texture space of the particular pixel being shaded…before performing the lighting calculations?

  3. Do we need to use the inverse of the texture space matrix in order to transform the L or H vectors (already defined in model space) into texture space? (I reckoned that since the texture space vectors are defined in model space, the resulting 3x3 matrix will transform vectors from texture space to model space and not vice versa - which means we need its inverse if we want to transform from model space to texture space…is this correct?)

I’ve been reading the nVidia docs on per-pixel shading and I must say the details are sketchy at best…which explains why I ended up with more questions than when I started.

Confused OpenGL coder.

1: Yes, technically, you should be interpolating the tangent vectors and doing the transformation per-pixel if you want the most accurate illumination. If you’re using a GeForce3+/Radeon 8500, you can actually do it. However, transforming the light vector at each vertex and interpolating the result looks reasonably good anyway.

  2. The sequence is (a minimal sketch follows after this list):
    a) Transform the L/H vector into tangent space.
    b) Interpolate the tangent-space light vector.
    c) Re-normalize the light vector (if you want).
    d) Perform the per-pixel dot product.

  3. Implement it the way the paper says. If things come out wrong, try transposing (which, for an orthonormal basis, is the same as inverting) the tangent space matrix. However, I bet the way they specify how the 3 vectors go into the matrix is the correct one.
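A minimal sketch of step a), assuming you already have per-vertex tangent (S), binormal (T) and normal (N) vectors in model space and the light vector L in the same space (the Vec3 type and function names here are just for illustration):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Multiplying L by the transpose of the [S T N] matrix is just three
   dot products. For an orthonormal basis the transpose IS the inverse,
   which is why this takes L from model space into tangent space. */
Vec3 toTangentSpace(Vec3 S, Vec3 T, Vec3 N, Vec3 L)
{
    Vec3 out;
    out.x = dot(S, L);
    out.y = dot(T, L);
    out.z = dot(N, L);
    return out;
}

Run that once per vertex and hand the result to the rasterizer (step b) as a texture coordinate.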

[This message has been edited by Korval (edited 03-01-2002).]

Texture space and tangent space are the same thing. But you are right, its basis vectors are expressed relative to object space.

So you need to transform the light vector into object space, then transform that vector into tangent space.

Store the vectors in the texture coordinates and use a normalization cube map to renormalize your vector before the dot product.
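In case the cube map trick is unfamiliar: a normalization cube map is a cube map whose texel in direction v stores normalize(v) packed into RGB, so a lookup with an unnormalized vector returns the normalized one. A rough sketch of filling the +X face (the function name and size parameter are my own; the sc/tc mapping follows the cube map face convention in the GL spec, so double-check it for your setup):

#include <math.h>

/* Fill the +X face of a normalization cube map: each texel stores the
   normalized direction vector it corresponds to, packed from [-1,1]
   into [0,255]. The other five faces work the same way with the axes
   swapped/negated per face. */
void fillPositiveXFace(unsigned char *rgb, int size)
{
    int s, t;
    for (t = 0; t < size; ++t) {
        for (s = 0; s < size; ++s) {
            /* direction through this texel on the +X face */
            float x = 1.0f;
            float y = -(2.0f * (t + 0.5f) / size - 1.0f);
            float z = -(2.0f * (s + 0.5f) / size - 1.0f);
            float len = (float)sqrt(x * x + y * y + z * z);
            unsigned char *p = rgb + 3 * (t * size + s);
            p[0] = (unsigned char)(255.0f * (0.5f * x / len + 0.5f));
            p[1] = (unsigned char)(255.0f * (0.5f * y / len + 0.5f));
            p[2] = (unsigned char)(255.0f * (0.5f * z / len + 0.5f));
        }
    }
}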

  2. The sequence is:
    a) Transform the L/H vector into tangent space.
    b) Interpolate the tangent-space light vector.

    => OK, suppose I transform the L/H vector at vertex 1 into the tangent space of vertex 1, and the L/H vector at vertex 2 into the tangent space of vertex 2…and now try to interpolate between the 2 vectors - but how do I interpolate between 2 vectors that are defined in different spaces? (I thought we could only do this if the 2 vectors were in the same space? On the other hand, if the 2nd L/H vector were transformed into the tangent space of vertex 1 as well, then interpolating would make sense, since both vectors would then be in the same tangent space.)

    c) Re-normalize the light vector (if you want).
    d) Perform the per-pixel dot product.

  3. Implement it the way the paper says. If things come out wrong, try transposing (which, for an orthonormal basis, is the same as inverting) the tangent space matrix. However, I bet the way they specify how the 3 vectors go into the matrix is the correct one.

    => Hmm, I suspected that as well…which leads me to one question - does OpenGL use row-major matrices?

Perhaps a simple OpenGL code snippet illustrating the above sequence would go far in shedding more light (particularly one showing how register combiner code is actually used to implement steps a, b, c and d)?
Could someone be so kind as to spare a few minutes for this? (I’ve been poring through the nVidia articles, docs and demos for a simple code snippet that does this, but to no avail…it’s really hard to get to grips with this stuff when the documentation is so spotty.)

A million thanks in advance!

Tangents and binormals are supposed to be created from a continuous function, and we can assume that function changes smoothly over a triangle (because a triangle is really approximating a curved surface), so interpolating the light vector is correct.
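To see why that works, here is a toy sketch of what the interpolator plus renormalization effectively compute at each fragment (the barycentric weights and function name are illustrative, not real GL):

#include <math.h>

/* A barycentric blend of the three per-vertex tangent-space light
   vectors, renormalized afterwards. The renormalization at the end is
   exactly the step the cube map lookup performs in hardware. */
void interpolatedLight(const float L0[3], const float L1[3],
                       const float L2[3], float w0, float w1, float w2,
                       float out[3])
{
    float len;
    int i;
    for (i = 0; i < 3; ++i)
        out[i] = w0 * L0[i] + w1 * L1[i] + w2 * L2[i];
    len = (float)sqrt(out[0] * out[0] + out[1] * out[1] + out[2] * out[2]);
    for (i = 0; i < 3; ++i)
        out[i] /= len;
}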

OpenGL uses column-major matrices.
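Concretely, a 4x4 OpenGL matrix is a flat array of 16 floats laid out column by column, which is why the translation part lands in elements 12-14:

#include <GL/gl.h>

/* Column-major layout: element (row, col) lives at m[col*4 + row].
   So after glTranslatef, the translation sits in m[12..14]. */
void showColumnMajor(void)
{
    GLfloat m[16];
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(1.0f, 2.0f, 3.0f);
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    /* m[12] == 1.0f, m[13] == 2.0f, m[14] == 3.0f here. */
}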

Step a is done in software or in a vertex shader.
Step b is done by our good friend OpenGL.
Step c is done by the texture lookup when fetching from the cube map.
Step d is done in a register combiner or a simple TexEnv.

Here is an example in pseudo-code.

//per-pixel setup, assuming you have 2 tex units available
//first pass:
//tex unit 1 holds the bump map
//tex unit 2 holds the normalization cube map

float* vertices;                    //all the vertices, GIVEN
float* texCoordDiffuse;             //texcoords for the diffuse and bump maps, GIVEN
float* lightVectorsInTangentSpace;  //light vector per vertex in tangent space, NEED TO COMPUTE

for each vertex
    lVector     = calculateLightVector();
    lVectorObj  = transformLightVectorToObjectSpace( lVector );
    lVectorTang = transformLightVectorInObjSpaceToTangentSpace( lVectorObj );
    lightVectorsInTangentSpace[vertex] = lVectorTang;
endFor

//setup vertex arrays
glVertexPointer( vertices );

EnableTexUnit1();
BindBumpMap();
//2 components, plain (s, t) into the bump map
glTexCoordPtr( 2, texCoordDiffuse );

EnableTexUnit2();
BindCubeMap();
//this one has 3 components because it is a vector!
glTexCoordPtr( 3, lightVectorsInTangentSpace );

//DOT is the dot3 combine mode (ARB_texture_env_dot3)
TexEnv( DOT, texunit1, texunit2 );
Render();

//second pass: modulate by the diffuse map
DisableUnit2();

TexEnv( replace );
BlendFunc( Mult );   //i.e. glBlendFunc( GL_DST_COLOR, GL_ZERO )
EnableUnit1();
BindDiffuseMap();
Render();
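Since the original question asked about register combiners specifically: on NVIDIA hardware the TexEnv( DOT, … ) line above can instead be an NV_register_combiners setup. A hedged sketch of step d only, assuming the bump map sits on GL_TEXTURE0_ARB, the normalization cube map on GL_TEXTURE1_ARB, and the extension entry points are available; double-check against the extension spec:

#include <GL/gl.h>
#include <GL/glext.h>

/* Step d with NV_register_combiners:
   spare0 = expand(tex0) . expand(tex1), where tex0 is the bump map
   normal and tex1 the renormalized light vector, both stored in [0,1]
   and expanded back to [-1,1]. */
void setupDot3Combiner(void)
{
    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    /* A = normal fetched from the bump map (texture unit 0) */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    /* B = light vector fetched from the normalization cube map (unit 1) */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    /* AB output becomes a dot product, written to spare0 */
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                       GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

    /* Final combiner computes A*B + (1-A)*C + D; pass spare0 through
       by forcing B to 1 and C, D to 0. */
    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                           GL_UNSIGNED_INVERT_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
}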

Looks funny. The forum erased the QUOTE!
[This message has been edited by Gorg (edited 03-01-2002).]


Right, thanks…I think the picture’s a little clearer now…

Cheers :wink: