I’d like to share a code snippet which generates the texture-space basis vectors on the fly in the fragment shader. All you need are the vertex position, vertex normal and texture coordinates passed to the fragment shader as varyings:

```
mat3 genTextureSpace(in vec2 texcoords, in vec3 vertexpos, in vec3 vertexnormal) {
    // screen-space derivatives of the texture coordinates and the position
    vec2 st0 = dFdx(texcoords);
    vec2 st1 = dFdy(texcoords);
    vec3 q0 = dFdx(vertexpos);
    vec3 q1 = dFdy(vertexpos);
    vec3 N = normalize(vertexnormal);
    vec3 T = ( q0*st1.y - q1*st0.y);
    vec3 B = (-q0*st1.x + q1*st0.x);
    // handle mirrored texture spaces
    if (dot(N, cross(T, B)) < 0.0) {
        T = -T;
        B = -B;
    }
    // orthogonalize T and B against N;
    // this way the interpolated vertexnormal "smoothes" the generated
    // texture space across the triangle
    T = normalize(T - N*dot(T, N));
    B = normalize(B - N*dot(B, N));
    // the resulting matrix should be used to transform the fetched
    // normal-map vector into the same space 'vertexpos' and
    // 'vertexnormal' are given in
    return mat3(T, B, N);
}
```
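To show how the returned matrix would be used, here is a minimal fragment shader sketch around the function (the varying and uniform names `v_position`, `v_normal`, `v_texcoords` and `u_normalmap` are placeholders of my own, not from the original code):

```
varying vec3 v_position;   // e.g. eye-space vertex position
varying vec3 v_normal;     // normal in the same space
varying vec2 v_texcoords;

uniform sampler2D u_normalmap;

void main() {
    mat3 TBN = genTextureSpace(v_texcoords, v_position, v_normal);
    // unpack the tangent-space normal from [0,1] to [-1,1]
    vec3 n = texture2D(u_normalmap, v_texcoords).xyz * 2.0 - 1.0;
    // transform it into the space v_position/v_normal are given in
    n = normalize(TBN * n);
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0); // visualize the result
}
```

Whatever space the position and normal varyings are in (eye space, world space, ...) is the space the perturbed normal comes out in, so lighting just has to be done consistently in that same space.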

This technique is based on

http://hacksoflife.blogspot.com/2009/11/per-pixel-tangent-space-normal-mapping.html

The way mirrored texture coordinates are handled is somewhat guessed, but it works nicely. If anybody has an idea why the problem exists and why this fixes it, I’d like to know. I was kind of surprised to need this, since my CPU-based code doesn’t do it. On the other hand, the CPU code must explicitly handle vertices where mirrored textures meet (“seams”): such vertices have to be duplicated. The fragment-shader-based TBN generation doesn’t need this kind of treatment.
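For completeness, the varyings the function consumes could be fed from a vertex shader along these lines (a sketch using the legacy built-in attributes; the varying names are my own, and everything is kept in eye space here):

```
varying vec3 v_position;
varying vec3 v_normal;
varying vec2 v_texcoords;

void main() {
    // eye-space position and normal; any consistent space works
    v_position  = vec3(gl_ModelViewMatrix * gl_Vertex);
    v_normal    = gl_NormalMatrix * gl_Normal;
    v_texcoords = gl_MultiTexCoord0.st;
    gl_Position = ftransform();
}
```

Note there is no tangent attribute anywhere, which is the whole point: no per-vertex tangents to compute, store or duplicate at seams.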

Another disadvantage of the fragment-shader-based technique is that it sometimes produces a slightly faceted look on skinned models. This is because there’s no averaging of the TB vectors across triangles, as there is in the CPU-based method.

Edit: I just realized that this post should have been done in the GLSL forum… could somebody move it there?