# tangent space

I’d like to try implementing per-pixel lighting+bump mapping with Cg, but I still don’t understand how to use tangent space exactly.

AFAIK the light position and eye vector must be transformed from object space into tangent space, which is defined by the basis vectors (tangent, binormal and normal = tangent × binormal) for each vertex, right?
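In other words (if I've got the math right), for an orthonormal basis the object->tangent transform should just be the matrix with T, B and N as its rows:

```latex
\mathbf{v}_{\text{tan}} =
\begin{pmatrix}
T_x & T_y & T_z \\
B_x & B_y & B_z \\
N_x & N_y & N_z
\end{pmatrix}
\mathbf{v}_{\text{obj}}
```

i.e. the transpose (which equals the inverse, since the basis is orthonormal) of the matrix taking tangent space to object space.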

First of all, a pointer to a good article explaining how to calculate that object->tangent space transformation matrix would be a great start.

Then: my mesh triangles usually share vertices, so if I have a matrix per vertex, the same matrix would be used for different faces. Is this a problem?

And finally, how can I pass the matrix for each vertex to the vertex/fragment program? The tangent could be passed through COLOR0, but what about the rest of the matrix?

Hmm, can I first calculate tangents for every triangle vertex and then take the average of them, just as is usually done when computing vertex normals?
Then I could pass the vertex normal as usual, pass the averaged tangent as a second texture coordinate, and calculate the binormal in the vertex program.
Or is this wrong? Somehow I just couldn't find that said in plain English.
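To make that concrete, here's roughly what I have in mind on the cpu side (my own sketch, names made up; the per-face tangent comes from the usual position/UV delta formula):

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s){ return {a.x * s, a.y * s, a.z * s}; }
static Vec3 normalize(Vec3 a) {
    float l = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return scale(a, 1.0f / l);
}

// Per-face tangent from the position and UV deltas of one triangle.
// The tangent points along the direction of increasing u on the surface.
Vec3 faceTangent(Vec3 p0, Vec3 p1, Vec3 p2, Vec2 t0, Vec2 t1, Vec2 t2) {
    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    float du1 = t1.x - t0.x, dv1 = t1.y - t0.y;
    float du2 = t2.x - t0.x, dv2 = t2.y - t0.y;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);
    return scale(sub(scale(e1, dv2), scale(e2, dv1)), r);
}

// Accumulate the face tangents into shared vertices and normalize,
// exactly like averaging face normals into vertex normals.
std::vector<Vec3> averageTangents(const std::vector<Vec3>& pos,
                                  const std::vector<Vec2>& uv,
                                  const std::vector<int>& indices) {
    std::vector<Vec3> acc(pos.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        int a = indices[i], b = indices[i + 1], c = indices[i + 2];
        Vec3 t = faceTangent(pos[a], pos[b], pos[c], uv[a], uv[b], uv[c]);
        acc[a] = add(acc[a], t);
        acc[b] = add(acc[b], t);
        acc[c] = add(acc[c], t);
    }
    for (auto& t : acc) t = normalize(t);
    return acc;
}
```

The result would then be fed in as the second texture coordinate, with the binormal rebuilt per vertex in the vertex program.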

yes. you can average them.
to send the TBN matrix to the vertex program, you can use the normal as usual. the tangents and binormals can be sent through

glVertexAttribPointerARB(…)

which sets up a user defined per vertex data stream.

the following vertex program demonstrates how to access the per-vertex attributes set up through the above function.

``````
!!ARBvp1.0
OPTION ARB_position_invariant;

ATTRIB inPos       = vertex.position;
ATTRIB inNormal    = vertex.normal;
ATTRIB inTangent   = vertex.attrib[5];
ATTRIB inBinormal  = vertex.attrib[6];

PARAM lightPos     = program.local[0];

OUTPUT outTex      = result.texcoord[0];

TEMP lightVec;

SUB lightVec, lightPos, inPos;
DP3 outTex.x, lightVec, inTangent;
DP3 outTex.y, lightVec, inBinormal;
DP3 outTex.z, lightVec, inNormal;

END
``````

the program calculates the light vector from the vertex to the light position and transforms it into tangent space. the result will be written into the texture coords of the first texture unit.
the light position is passed through glProgramLocalParameter4fARB.

the tangent stream uses stream index 5 and the binormal stream index 6.

regards,
jan

I couldn't find glVertexAttribPointerARB in the extension registry, and anyway, I'm using nVidia's Cg toolkit and its runtime API to fetch the parameters into the vertex programs (the 'C'-style ones, not the ASM style).

jabe, shouldn't it be the inverse (negated) light vector that gets dotted with the tangent basis vectors?

Should I calculate the binormal in the vertex program to save some bandwidth? Although I'm using VBOs most of the time, so it might not save much.

sorry, i didn't get that you want to do it with cg. my post refers to plain ARB vertex programs.

i don't understand your question at all. you have to transform the light vector into tangent space. what i do is pass the light position in object space (transformed on the cpu once per mesh) into the vertex program, build the object-space light direction there, and transform it into tangent space by the inverse tangent-space matrix.
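to illustrate on the cpu side (my own sketch, names made up): since the basis is orthonormal, the inverse of the tangent-space matrix is its transpose, so the change of basis is just three dot products, exactly like the three DP3s in the vertex program above.

```cpp
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Change of basis from object space into tangent space. With an
// orthonormal T/B/N basis, the inverse matrix is the transpose, so
// the transform reduces to three dot products.
Vec3 toTangentSpace(Vec3 v, Vec3 tangent, Vec3 binormal, Vec3 normal) {
    return { dot(v, tangent), dot(v, binormal), dot(v, normal) };
}

// Per vertex: build the object-space light direction, then change basis.
Vec3 tangentSpaceLightVec(Vec3 lightPosObj, Vec3 vertexPosObj,
                          Vec3 tangent, Vec3 binormal, Vec3 normal) {
    return toTangentSpace(sub(lightPosObj, vertexPosObj),
                          tangent, binormal, normal);
}
```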

whether calculating the binormal on the gpu gives a performance gain depends on the system and the application. saving memory bandwidth and gpu memory is always a good idea, though. get the algorithm itself to work first and optimize later.

I'm sorry that this topic is drifting more into Cg territory, since I have some problems with that too.

I prefer to write the shader code into a text file and then compile it at runtime, so that different extension versions can be used. My vertex shader compiles fine, but after compiling my fragment shader, cgCreateProgram causes my app to crash. If I intentionally type an invalid expression into the fragment shader, the compiler gives an error and does not crash my app. So it only crashes when the shader has compiled. Weird though. Here's my function call:

``````
Program = cgCreateProgram(Context, CG_SOURCE, (const char*)File.GetBuffer(),
                          Prof, NULL, NULL);
if( (Err = cgGetError())!=CG_NO_ERROR )
{
Con << cgGetErrorString(Err) << CEndl;
return false;
}
``````

There's some app-specific stuff in there, but it should basically show what I'm doing. Context is created before calling this one and Prof is set to the proper fragment shader profile. At least I think it is.

Apart from that, I've coded the tangent vector calculator, and to make sure it works I made my app render the basis vectors at the vertices. Here's an image, so you can tell me if they look the way they should:

http://koti.mbnet.fi/blender/poista/basis.jpg
Red=tangent
Green=Vertex normal
Blue=Binormal

I can't test my shaders until I fix the crashing issue, but here they are. Maybe you can spot any flaws they might have:

``````
struct appin
{
	float4 Position : POSITION;
	float4 Normal   : NORMAL;
	float2 BaseUV   : TEXCOORD0;
	float4 Tangent  : TEXCOORD1;
};

struct vertout
{
	float4 HPosition : POSITION;
	float2 BaseUV    : TEXCOORD0;
	float3 Color     : COLOR;
};

vertout main(appin IN, uniform float4x4 ModelViewProj, uniform float4 LightVec)
{
	vertout OUT;

	OUT.HPosition = mul(ModelViewProj, IN.Position);
	OUT.BaseUV    = IN.BaseUV;

	// derive the binormal from the other two basis vectors
	float3 BiNormal = normalize(cross(IN.Normal.xyz, IN.Tangent.xyz));

	// transform the (negated) light vector into tangent space
	OUT.Color.x = dot(-LightVec.xyz, IN.Tangent.xyz);
	OUT.Color.y = dot(-LightVec.xyz, BiNormal);
	OUT.Color.z = dot(-LightVec.xyz, IN.Normal.xyz);

	return OUT;
}
``````

``````
struct fragin
{
	float2 BaseUV   : TEXCOORD0;
	float3 LightVec : COLOR;	// Tangent space light vector
	uniform sampler2D BaseTex : TEXUNIT0;
	uniform sampler2D BumpTex : TEXUNIT1;
};

float3 main(fragin IN)
{
	float3 BaseColor = tex2D(IN.BaseTex, IN.BaseUV).rgb;

	// expand the [0,1] normal-map texel back to [-1,1]
	float3 BumpVec = 2.0*(tex2D(IN.BumpTex, IN.BaseUV).rgb-0.5);

	return BaseColor * saturate( dot(IN.LightVec, BumpVec) );
}
``````

So I'd have the tangent pre-calculated and stored in the second texture coordinate set, and the object-space light vector given as a uniform parameter before the mesh is rendered.

Ok, I found an error in the shader code: I should pass the light position instead of the light vector (as you said, jabe) and calculate the light vector in the shader.

Still, I've checked a load of others' implementations of shader loading, and my own one still crashes.

I got it working. I had forgotten to specify that the fragment program returns a color, and that seems to be why the compiler crashed.

Also, if you’re using the primary color, you’ll have to scale and bias it (since color interpolators are unsigned).

If I've understood correctly, the interpolated light and half vectors can get "shortened" when a light is close to the surface, so a normalization cubemap should be used. So if I bind one to texture unit 2, can I access it like this in the fragment program:

``````
float3 NrLightVec = 2.0*(texCUBE(Nr, LightVec).xyz-0.5);
float3 NrHalfVec  = 2.0*(texCUBE(Nr, HalfVec).xyz-0.5);
``````

The vectors in the cubemap are in the range [0, 1].