normal mapping in whatever space

That’s exactly what I’m unsure of. I honestly have no idea what space they are in. I think they are in object space, but I’m not at all sure. If they are in the correct space, making this code correct, then the only other possible error would be in how I compute the tangent and binormal, which I doubt. Here is the code I use to compute T and B from the N I received from the modeling program I use (Maya). The bottom section of the code sets up the vertex buffer objects, in case that part looks strange to anyone. The parameters are a list of each object’s vertices and UVs, as well as the total object count and the number of vertices in each object. Vertices are x, y, z and UVs are u, v.

void TangentMatrix::computeMatrix(float **vertices, float **uvs,
int objCount, int *vertCount)
{
float **tangents, **binormals;
tangents = new float*[objCount];
binormals = new float*[objCount];

// compute one tangent/binormal pair per triangle
// (each iteration of the inner loop consumes 9 floats = 3 vertices)
for(int i = 0 ; i < objCount ; i++)
{
	tangents[i] = new float[vertCount[i] * 3];
	binormals[i] = new float[vertCount[i] * 3];
	for(int j = 0 ; j < vertCount[i] * 3 ; j += 9)
	{
		// edge vectors from vertex 0 to vertices 1 and 2 of the triangle
		float deltaX2 = vertices[i][j + 3] - vertices[i][j];
		float deltaY2 = vertices[i][j + 4] - vertices[i][j + 1];
		float deltaZ2 = vertices[i][j + 5] - vertices[i][j + 2];
		float deltaX3 = vertices[i][j + 6] - vertices[i][j];
		float deltaY3 = vertices[i][j + 7] - vertices[i][j + 1];
		float deltaZ3 = vertices[i][j + 8] - vertices[i][j + 2];

		// matching UV deltas (two floats per vertex)
		float deltaU2 = uvs[i][((j / 3) * 2) + 2] - uvs[i][(j / 3) * 2];
		float deltaV2 = uvs[i][((j / 3) * 2) + 3] - uvs[i][((j / 3) * 2) + 1];
		float deltaU3 = uvs[i][((j / 3) * 2) + 4] - uvs[i][(j / 3) * 2];
		float deltaV3 = uvs[i][((j / 3) * 2) + 5] - uvs[i][((j / 3) * 2) + 1];

		// treat each position component as a linear function of (u, v):
		// the cross product in (component, u, v) space gives a plane
		// whose slopes are that component of T and B
		Vector3 one = Vector3(deltaX2, deltaU2, deltaV2);
		one = one.cross(Vector3(deltaX3, deltaU3, deltaV3));

		Vector3 two = Vector3(deltaY2, deltaU2, deltaV2);
		two = two.cross(Vector3(deltaY3, deltaU3, deltaV3));

		Vector3 three = Vector3(deltaZ2, deltaU2, deltaV2);
		three = three.cross(Vector3(deltaZ3, deltaU3, deltaV3));

		float Tx = 0.0f, Ty = 0.0f, Tz = 0.0f, 
			Bx = 0.0f, By = 0.0f, Bz = 0.0f;
		if(one.x != 0)
		{
			one.normalize();
			Tx = -one.y / one.x;
			Bx = -one.z / one.x;
		}
		if(two.x != 0)
		{
			two.normalize();
			Ty = -two.y / two.x;
			By = -two.z / two.x;
		}
		if(three.x != 0)
		{
			three.normalize();
			Tz = -three.y / three.x;
			Bz = -three.z / three.x;
		}

		Vector3 T = Vector3(Tx, Ty, Tz);
		Vector3 B = Vector3(Bx, By, Bz);
		T.normalize();
		B.normalize();

		// all three vertices of this triangle share the same basis
		tangents[i][j] = T.x;
		tangents[i][j + 1] = T.y;
		tangents[i][j + 2] = T.z;

		tangents[i][j + 3] = T.x;
		tangents[i][j + 4] = T.y;
		tangents[i][j + 5] = T.z;

		tangents[i][j + 6] = T.x;
		tangents[i][j + 7] = T.y;
		tangents[i][j + 8] = T.z;

		binormals[i][j] = B.x;
		binormals[i][j + 1] = B.y;
		binormals[i][j + 2] = B.z;

		binormals[i][j + 3] = B.x;
		binormals[i][j + 4] = B.y;
		binormals[i][j + 5] = B.z;

		binormals[i][j + 6] = B.x;
		binormals[i][j + 7] = B.y;
		binormals[i][j + 8] = B.z;
	}
}

// build the vertex buffer objects
g_uiTangents = new unsigned int[objCount];
g_uiBinormals = new unsigned int[objCount];

for(int i = 0 ; i < objCount ; i++)
{
	glGenBuffersARB(1, &g_uiTangents[i]);
	glBindBufferARB(GL_ARRAY_BUFFER_ARB, g_uiTangents[i]);
	glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertCount[i] * 3 * sizeof(float), tangents[i], GL_STATIC_DRAW_ARB);

	glGenBuffersARB(1, &g_uiBinormals[i]);
	glBindBufferARB(GL_ARRAY_BUFFER_ARB, g_uiBinormals[i]);
	glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertCount[i] * 3 * sizeof(float), binormals[i], GL_STATIC_DRAW_ARB);

	delete[] tangents[i];
	delete[] binormals[i];
}

// free the outer pointer arrays (the per-object data was already
// released inside the loop above)
delete[] tangents;
delete[] binormals;
}
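For reference, the same T and B can also be obtained by solving the 2x2 UV system directly instead of taking three cross products; this is the formulation most tangent-space tutorials use. A sketch of how it would slot into the inner loop above, reusing the same delta names (untested, and it matches my version only up to normalization):

float det = deltaU2 * deltaV3 - deltaU3 * deltaV2;
if(det != 0.0f)
{
	// E2 = dU2*T + dV2*B and E3 = dU3*T + dV3*B, solved for T and B
	float r = 1.0f / det;
	Vector3 T = Vector3((deltaV3 * deltaX2 - deltaV2 * deltaX3) * r,
		(deltaV3 * deltaY2 - deltaV2 * deltaY3) * r,
		(deltaV3 * deltaZ2 - deltaV2 * deltaZ3) * r);
	Vector3 B = Vector3((deltaU2 * deltaX3 - deltaU3 * deltaX2) * r,
		(deltaU2 * deltaY3 - deltaU3 * deltaY2) * r,
		(deltaU2 * deltaZ3 - deltaU3 * deltaZ2) * r);
	T.normalize();
	B.normalize();
}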

After some messing around with this, I believe PfhorSlayer is correct. The only problem now is that I don’t know how to invert gluLookAt properly. Does anyone know how to reverse it correctly? Thanks for all your help so far, guys.
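Edit: since gluLookAt only rotates and translates, I assume its inverse is just the transposed rotation with the translation negated and re-rotated. A sketch of what I mean, for column-major matrices as glGetFloatv(GL_MODELVIEW_MATRIX, m) returns them (untested):

// invert a rigid transform M = [R | t]: the inverse is [R^T | -R^T * t]
void invertRigid(const float m[16], float out[16])
{
	// transpose the upper-left 3x3 rotation block
	for(int c = 0 ; c < 3 ; c++)
		for(int r = 0 ; r < 3 ; r++)
			out[c * 4 + r] = m[r * 4 + c];

	// translation becomes -R^T * t, with t = (m[12], m[13], m[14])
	for(int r = 0 ; r < 3 ; r++)
		out[12 + r] = -(out[r] * m[12] + out[4 + r] * m[13] + out[8 + r] * m[14]);

	// bottom row stays (0, 0, 0, 1)
	out[3] = out[7] = out[11] = 0.0f;
	out[15] = 1.0f;
}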

This still sounds overcomplicated to me. Why not define your light as a glLight? Then your light position will be delivered to your shader in view space instead of model space. That will allow you to transform the light position to model space with the inverse modelview matrix, and you won’t have to mess with world space at all.
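Something like this on the application side (a minimal sketch; the positions are placeholders):

// With the viewing transform on the modelview stack, GL multiplies the
// position given to glLightfv by the current modelview matrix and
// stores it in eye (view) space.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);

GLfloat lightPos[4] = { 10.0f, 20.0f, 30.0f, 1.0f }; // world space
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

In the shader you can then read the eye-space position (e.g. through glstate.light[0].position in Cg) and multiply it by the inverse modelview matrix to get back to object space.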

Tried that and it didn’t change anything.

Here’s yet another way to do it:

First, in the vertex program

...
 
// Vertex to light vector 
float3 lightVec = lightPosWorld.xyz - vertexPosWorld.xyz;
 
// Per-vertex tangent basis
// Note that if you have transformations
// on the model, then you need to xform this
// basis with the inverse transpose of the
// matrix that moved the model in the world
// Just assume an identity here for the world...
float3x3 worldToTangent = float3x3( T, B, N );
 
// Send tangent-space lightVec to fragment 
// program as texture coord
lightVecTan = float4( mul(worldToTangent,lightVec), 1 );
 
...

In the fragment program

...
 
// Grab the bump normal (the variable is renamed so it
// doesn't shadow the sampler, which is also called bump)
float4 bumpN = 2*tex2D(bump, texCoord0.xy) - 1;
 
// Fast normalize of lightVec via a normalization
// cubemap (normalize() will do too); texCoord? is
// whichever coord you passed lightVecTan on
float4 lightVecTan = 2*texCUBE( normalCube, texCoord?.xyz ) - 1;
 
// Diffuse dot product in tangent-space 
float diffuseDot = dot( lightVecTan.xyz, bumpN.xyz );
  
...

Hope this helps.

Let me break it down one last time. This is the order of operations before moving into the vertex and fragment shaders.

  • gluLookAt
  • set the light (I could either send the position to the shader by passing it as a parameter, or I could call glLightfv and look up the position in the shader with glstate; you choose).
  • draw the scene using vertex buffer objects
  1. What space does the POSITION semantic in cg give you for the vertex?
  2. Depending on what light method you chose above, what space does the light position come in?
  3. What matrix do I have to “mul” with the light position (or light vector) to get the light vector into object space, so that I may “mul” that vector with the tangent matrix to get it into tangent space?

Thank you for all your help so far. I’m just confused because different people and different sources on the internet seem to say completely different things. I just haven’t been able to get it right yet, and I’m still not sure what is coming into the shader in what space. Thank you for your time.

You can send the light position in world space by first pushing an identity onto the modelview stack before calling glLight(). By default, the GL transforms light positions with the current modelview matrix, just like points.
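For example (a minimal sketch; the position values are placeholders):

// With identity on the stack, glLightfv stores the position exactly
// as given, i.e. in world space.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
GLfloat lightPosWorld[4] = { 10.0f, 20.0f, 30.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightPosWorld);
glPopMatrix();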

  1. What space does the POSITION semantic in cg give you for the vertex?
    You calculate the position of the vertex in the program, using the modelview-projection combo (mvp). This gives you a position in homogeneous clip-space (see the sketch after these answers for how the mvp typically reaches the program).
  2. Depending on what light method you chose above, what space does the light position come in?
    That depends on how you specify the position, as mentioned above. You see, it doesn’t matter which space you choose, as long as you’re consistent about it, making sure all vectors are in the same space.
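For instance, with the Cg runtime the mvp is usually bound from the application like this (the parameter and program names here are just placeholders):

// bind the current GL modelview-projection to the program's "mvp"
CGparameter mvp = cgGetNamedParameter(vertexProgram, "mvp");
cgGLSetStateMatrixParameter(mvp, CG_GL_MODELVIEW_PROJECTION_MATRIX,
	CG_GL_MATRIX_IDENTITY);

Inside the vertex program, mul(mvp, position) then lands in clip space.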

You can move the light into tangent-space, or move the tangent space into light-space, it’s up to you. Just make sure everybody’s in the same space.

  3. What matrix do I have to “mul” with the light position (or light vector) to get the light vector into object space, so that I may “mul” that vector with the tangent matrix to get it into tangent space?
    Again, all this depends on what space you want to work in. The TBN matrix takes a vector into tangent-space. This is convenient for bump mapping, since that’s where the bumps live.
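If it helps to see it outside a shader: applying the TBN is just three dot products. A sketch using a Vector3 like the one in your tangent code (assuming it has a dot() method, which wasn’t shown; lightVecObj is a placeholder for an object-space light vector):

// rows T, B, N form the object-to-tangent rotation; multiplying a
// vector by it is three dot products
Vector3 toTangentSpace(const Vector3 &T, const Vector3 &B,
	const Vector3 &N, const Vector3 &v)
{
	return Vector3(T.dot(v), B.dot(v), N.dot(v));
}

// e.g. take an object-space light vector into tangent space, then dot
// it with the bump-map normal exactly as in the fragment program
Vector3 lightVecTan = toTangentSpace(T, B, N, lightVecObj);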

I’m just confused because different people and different sources on the internet seem to say completely different things.
Everyone here is saying essentially the same thing, just from a different point of view, and with a different space in mind.

If you don’t know what space things are in, things can get sticky indeed.

OK, I think I got it now. Thank you everyone for all your help.