Normal maps break lighting and cause seams

Sorry for the vague title; I couldn't find a better way to frame it. I am trying to implement normal mapping in my engine, but for some reason certain meshes are rendered with a central seam, with differing normals on each side. The images below illustrate this better.

example 1

and its normal buffer

I checked the normal maps and they seem fine. Also, apart from this sudden change, the other details in the normal map are rendered quite well.

Here is the code I am using to apply the normal mapping.

Vertex shader (TBN matrix calculation)

mat3 normalMatrix = transpose(inverse(mat3(modelMatrix)));
vec3 T = normalize(normalMatrix * tangent);
vec3 N = normalize(normalMatrix * normal);
vsOut.vertNormal = N;
// re-orthogonalize T with respect to N
T = normalize(T - dot(T, N) * N);
vec3 B = cross(N, T); 
vsOut.TBN = mat3(T, B, N);

Fragment shader

vec3 normal = normalize(texture(material.texture_normal, vsOut.texCoords).rgb);
normal = normalize(normal * 2.0 - 1.00392);
float mirrorStep = step(0.0f, dot(cross(vsOut.TBN[0], vsOut.TBN[1]), vsOut.vertNormal));
vec3 newTangent = mix(-vsOut.TBN[0], vsOut.TBN[0], mirrorStep);
mat3 TBNCorrected = vsOut.TBN;
TBNCorrected[0] = newTangent;
normal = normalize(TBNCorrected * normal); 
gNormal = mix(vsOut.vertNormal, normal, step(1.0f, material.hasNormalMap));

TBNCorrected is the TBN matrix with an inverted tangent. I initially suspected that this issue could be caused by mirrored UVs, and this was my attempted fix for that. However, it did not make any change to the output.

If the normal matrix is orthogonal, there’s no need to use the inverse transpose. And if it isn’t, a) you should perform this calculation in the client and upload it as a separate matrix rather than performing it for each vertex, and b) the tangent vector should be transformed by the model matrix, not the normal matrix.
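To make the client-side computation concrete, here is a minimal sketch of the inverse-transpose of the upper-left 3×3 of the model matrix, done once on the CPU and uploaded as a uniform. The row-major layout and names here are illustrative assumptions, not code from this engine:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<float, 9>; // row-major 3x3 (illustrative layout)

// Inverse-transpose via the cofactor matrix: inverse(M)^T == cofactor(M) / det(M).
Mat3 normalMatrix(const Mat3& m) {
    float c00 = m[4]*m[8] - m[5]*m[7];
    float c01 = m[5]*m[6] - m[3]*m[8];
    float c02 = m[3]*m[7] - m[4]*m[6];
    float c10 = m[2]*m[7] - m[1]*m[8];
    float c11 = m[0]*m[8] - m[2]*m[6];
    float c12 = m[1]*m[6] - m[0]*m[7];
    float c20 = m[1]*m[5] - m[2]*m[4];
    float c21 = m[2]*m[3] - m[0]*m[5];
    float c22 = m[0]*m[4] - m[1]*m[3];
    float det = m[0]*c00 + m[1]*c01 + m[2]*c02;
    float inv = 1.0f / det;
    return { c00*inv, c01*inv, c02*inv,
             c10*inv, c11*inv, c12*inv,
             c20*inv, c21*inv, c22*inv };
}
```

For a rotation (orthogonal) matrix this returns the matrix itself, which is why the inverse-transpose is only strictly needed when non-uniform scaling is present.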

It certainly looks like the mirroring is at the root of the problem. But I wouldn’t expect such a radical change from miscalculating either the tangent or bitangent, as typical normal maps don’t deviate that much from the normal (normal maps usually look light blue, i.e. (0.5, 0.5, 1.0) being the dominant colour). Even if you flip T and/or B, that shouldn’t affect the result when the normal is (0,0,1).

Where are the vertex normals which are fed to the vertex shader coming from? Have you checked that their values are correct? If you’re generating them from the geometry, are you accidentally flipping them? Do you get the correct result if you use the normal directly rather than using the normal mapping code?

I can’t see how the calculation in the fragment shader will affect anything; as TBN is derived from the normal, the triple product (T×B)·N should always have the same sign.
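For what it's worth, that can be checked numerically. Since B = cross(N, T) in the vertex shader, the identity T×(N×T) = N(T·T) − T(T·N) makes the triple product equal to |T|² once T is orthogonal to N, so it stays positive even for a flipped T. A standalone sketch (helper names are mine, not engine code):

```cpp
#include <cassert>

struct Vec { float x, y, z; };
Vec cross3(Vec a, Vec b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
float dot3(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Mimics the fragment-shader test: B is derived as cross(N, T), so
// (T x B) . N reduces to |T|^2 and can never go negative.
float mirrorTest(Vec t, Vec n) {
    Vec b = cross3(n, t);
    return dot3(cross3(t, b), n);
}
```

So step(0.0, mirrorTest(...)) is always 1, and the "corrected" tangent is always just the original tangent.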

@GClements thanks for your suggestions. I will pass the normal matrix as a uniform into the shader; that seems more optimal. The vertex normals and tangents are both output by Assimp when loading the model, and yes, they work as expected when used directly for lighting calculations (albeit without the normal-map details).
Also, now that you point it out, it makes sense why the TBNCorrected-based approach has no effect. Is there any other way to accommodate mirrored UVs in this case?

So, I think I was partially able to solve the issue. I extracted the bitangent at each vertex using Assimp and used it to calculate the handedness of the tangent space. I then used that to flip the tangent vector in the vertex shader. It solved the issue on the arch but merely changed its orientation on the fabric.
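The handedness computation described above can be sketched roughly like this (illustrative helpers, not the actual engine code):

```cpp
#include <cassert>

struct Vf { float x, y, z; };
Vf cr(Vf a, Vf b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
float dp(Vf a, Vf b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// +1 when Assimp's bitangent agrees with cross(N, T) (normal UVs),
// -1 when it opposes it (mirrored UVs). The sign can be uploaded as a
// per-vertex attribute (or packed into tangent.w) and multiplied into T.
float tangentHandedness(Vf n, Vf t, Vf b) {
    return (dp(cr(n, t), b) < 0.0f) ? -1.0f : 1.0f;
}
```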

Another weird thing I observed concerns normalization in my fragment shader. My current normal-mapping code in the FS looks like this:

vec3 normal = normalize(texture(material.texture_normal, vsOut.texCoords).rgb);
normal = normal * 2.0 - 1.00392f;
normal = normalize(vsOut.TBN * normal); 
gNormal = mix(vsOut.vertNormal, normal, step(1.0f, material.hasNormalMap));

However, if I remove the normalize on the first line, the issue completely resolves itself for the Sponza model, but then the normals look blocky on a different model I have in the scene (both using the same shader). The blockiness is understandable, as I am loading the normal map in GL_RGB format and hence normalization is necessary, but I don’t understand how this fixes the seams issue.

And on the other model (Crysis suit) it looks like this.

Notice the abrupt changes in the head, shoulder, and forearms. (Ignore the legs, it looks like a wireframe because it is the mesh currently being rendered at the paused draw call)

I feel I am making a mathematical error somewhere, but I can’t figure out where. It would be great if you could help me out with this.

It shouldn’t be necessary to normalise the values from the normal map; if it was generated correctly, they should always lie on the unit sphere (after the x*2-1 transformation from [0,1] to [-1,1]). Any deviation from unit length should be limited to quantisation error (1 part in 127 for RGB8), which shouldn’t be enough to be visible.

If you’re flipping the normal map left-to-right, you need to flip the tangent vector but you need to do that after calculating B=N×T, otherwise it will flip the bitangent as well: N×(-T)=-(N×T). A flipped coordinate system should have the opposite handedness (i.e. negated determinant) compared to the original.
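That ordering can be verified with a small standalone check (illustrative helpers, not code from this thread): negating T before computing B leaves the frame's determinant at +1, while negating it afterwards flips the sign:

```cpp
#include <cassert>

struct V3 { float x, y, z; };
V3 crs(V3 a, V3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
float dt(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
V3 neg(V3 a) { return { -a.x, -a.y, -a.z }; }

// Triple product (T x B) . N: the determinant (handedness) of the frame.
float handed(V3 t, V3 b, V3 n) { return dt(crs(t, b), n); }
```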

If the model has both tangent and bitangent vectors you should use them. Otherwise, does it have flags to indicate flipping of the normal map? If it could be flipped either horizontally or vertically (or both), you can’t distinguish the two via handedness.

The fact remains that any mistake in the calculation of T or B shouldn’t affect parts of the surface where the normal map contains (0,0,1), and shouldn’t significantly affect parts of the surface where the tangential components are small. Whereas miscalculating (and particularly flipping) the normal will have a drastic effect.

It shouldn’t be necessary to normalize the values from the normal map; if it was generated correctly, they should always lie on the unit sphere (after the x*2-1 transformation from [0,1] to [-1,1]). Any deviation from unit length should be limited to quantization error (1 part in 127 for RGB8), which shouldn’t be enough to be visible.

I did some research on this, and yup, I shouldn’t have done this. Normal maps are already encoded in linear space, so there should not be any need to normalize them again. However, without this normalization the issue still remained on the Crysis model, so I decided to check it in Unity. I rendered the model with the Standard shader, the same maps, and a deferred rendering setup. It turns out the normals look blocky even in Unity.

So maybe there is something wrong or different with the normal maps themselves? In that case, how are they different, and why does the normalization seem to fix it?

Can you send a link to that normal map file?

You can try the assimp viewer to see if there are any differences. If you’re on Windows, here is the link (I have not tested it, so I’m just hoping it works). It might also be worth having a look at their code to see if yours is aligned with it.

I’ve only just noticed this:

vec3 normal = normalize(texture(material.texture_normal, vsOut.texCoords).rgb);
normal = normal * 2.0 - 1.00392f;

This is backward; you need to transform from [0,1] to [-1,1] before normalising.

Does the issue go away if you fix that?
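To illustrate why the order matters: for a "flat" texel (0.5, 0.5, 1.0), remapping first decodes to (0, 0, 1), but normalising the raw texel first skews every component before the remap, so the decoded normal points in a noticeably different direction. A standalone sketch (my own helper names):

```cpp
#include <cassert>
#include <cmath>

struct Tex { float x, y, z; };

Tex normalize3(Tex v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Decode a normal-map texel from [0,1] to [-1,1].
Tex remap(Tex v) { return { v.x*2.0f - 1.0f, v.y*2.0f - 1.0f, v.z*2.0f - 1.0f }; }
```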

If it’s still wrong, what does the normal map itself look like? Is it in model space rather than surface (TBN) space? Using surface space allows a normal map to be “applied” to an arbitrary surface as a texture, but if a normal map will only ever be applied to a specific part of the model you can skip the TBN calculation and just store the normals in model space.