Polygon edges visible while applying normal map

Hello,

I’m seeing noticeably sharp edges when applying a normal-map shading effect to a mesh, as can be seen below:

The effect gets stronger the more deformed the mesh is: for a flat plane it isn’t visible at all, but the more deformation there is (height variation in this case), the more noticeable it becomes.

But when the normal map is not applied, those edges are smoothed out as they should be, since the normals average correctly.

This is my code to perform the normal mapping:

varying vec3 vNormal;
varying vec3 vPos;
varying vec2 uvs;

if(object.normal_map){
	vec3 map = normal.rgb * 2.0 - 1.0;        // 'normal' holds the sampled normal-map texel
	mat3 TBN = computeTBN(vNormal, vPos, uvs);
	vNormal  = normalize(TBN * map);
}

The computeTBN function calculates the tangent and bitangent in the fragment shader:

mat3 computeTBN(vec3 N, vec3 p, vec2 uv){
    // get edge vectors of the pixel triangle
    vec3 dp1  = dFdx(p);
    vec3 dp2  = dFdy(p);
    vec2 duv1 = dFdx(uv);
    vec2 duv2 = dFdy(uv);
 
    // solve the linear system
    vec3 dp2perp = cross(dp2, N);
    vec3 dp1perp = cross(N, dp1);
    vec3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    vec3 B = dp2perp * duv1.y + dp1perp * duv2.y;
 
    // construct a scale-invariant frame 
    float invmax = inversesqrt(max(dot(T,T), dot(B,B)));
    return mat3(T * invmax, B * invmax, N);
}

I’ve tried searching for this issue on the web, to no avail so far. Is there a known name for these artifacts that I can look up and read about? Or is there a solution I’m missing?

From my point of view this looks like some transformation I must be missing, or not applying correctly to the mesh, needed to recognize the correct orientation of the primitive, but so far I haven’t gotten any good result.

dp1 and dp2 will have discontinuities at triangle boundaries, and so will anything derived from them.

Try rendering using T or B (normalised and mapped to [0,1]³) as the fragment colour.

Rendering with T:

	T = normalize(T);
	T = T*0.5+0.5;

Rendering with B:

	B = normalize(B);
	B = B*0.5+0.5;

Rendering with the normal map:

Is there something I should be looking for?

Is there any workaround or way to reduce this effect?

Yes: you have to provide proper tangents and bitangents. These are not things that can be calculated accurately within a fragment shader. They must be per-vertex attributes interpolated across the surface.
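For reference, a minimal sketch of that per-vertex path, assuming a precomputed tangent is available as a vertex attribute (the names tangent, normalMatrix and normalMap here are illustrative, not taken from the code above):

// Vertex shader: pass the per-vertex normal and tangent through for interpolation.
attribute vec3 position;
attribute vec3 normal;
attribute vec3 tangent;      // precomputed per vertex (e.g. by the asset pipeline)
attribute vec2 uv;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;

varying vec3 vNormal;
varying vec3 vTangent;
varying vec2 vUv;

void main(){
	vNormal  = normalMatrix * normal;
	vTangent = normalMatrix * tangent;
	vUv      = uv;
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

// Fragment shader: rebuild an orthonormal TBN from the interpolated vectors.
varying vec3 vNormal;
varying vec3 vTangent;
varying vec2 vUv;

uniform sampler2D normalMap;

void main(){
	vec3 N = normalize(vNormal);
	// Gram-Schmidt: remove the component of the tangent that lies along the normal.
	vec3 T = normalize(vTangent - N * dot(N, vTangent));
	vec3 B = cross(N, T);    // flip this if your tangents carry a handedness sign
	mat3 TBN = mat3(T, B, N);

	vec3 map = texture2D(normalMap, vUv).rgb * 2.0 - 1.0;
	vec3 shadedNormal = normalize(TBN * map);
	gl_FragColor = vec4(shadedNormal * 0.5 + 0.5, 1.0);   // visualize the shaded normal
}

Because T and B are now interpolated per vertex rather than derived per triangle, they vary smoothly across triangle boundaries and the faceting disappears.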

So after your response I found some articles about this issue where, like you said, the derivative-based method for normal maps only works for flat shading since B and T end up faceted. Thanks for the info on the matter.

Yes, I need them to be processed in the vertex shader so they can be interpolated, and since there is no dFdx at the vertex stage I’ll have to pass them in manually.
I was trying to avoid that mainly because I don’t want to put extra information in the VBOs that might never be needed at render time (if a mesh never uses a normal map), given that my mesh structure is completely separate from, and unaware of, the materials it will be given dynamically.

You know what they say about premature optimization… This is a very good example of a premature optimization. You don’t know if you’ll need it yet, you don’t even know if it’ll be a performance problem for you, but you’re trying to optimize around it and causing yourself problems now.

Instead, get it working the proper way now. If it turns out that you actually do need a case where you’ve no normal map later on, deal with it then. It might be perfectly valid to put a “fake” normal map (a 1x1 texture with RGB 0.5/0.5/1.0) on these objects instead, or you might do another shader permutation, or another VAO.
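For what it’s worth, a rough sketch of why a 1x1 texture of 0.5/0.5/1.0 is a valid stand-in, assuming the TBN matrix is built with N as its third column as in the code above: the texel decodes to tangent-space “up”, so the TBN multiply simply returns the interpolated normal.

vec3 map = vec3(0.5, 0.5, 1.0) * 2.0 - 1.0;   // decodes to (0, 0, 1), i.e. tangent-space "up"
vec3 n   = normalize(TBN * map);              // equals the interpolated surface normal N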

This is not even an optimization; if anything it’s the opposite, since optimizing would mean putting the TBN data in the VAOs, because performing the calculations on the shader side is worse than passing the data through. And what I want is to improve the rendering quality, not optimize it.

When I said “my only concern is having to pass more information on VBOs” I wasn’t talking about optimization, but rather about sending the GPU information that could turn out to be useless.

My system has been stable and running for a couple of years now, so it’s time to improve the output quality on these little things that were left behind; it has nothing to do with optimization.

Regarding this issue, I don’t think you understood what I meant by “not knowing if it will be needed later on”. Since this is a general-purpose framework, I need to separate the VAOs (models) from the shading properties (materials). I can’t know beforehand what material each model will have, especially when materials can be changed dynamically.

E.g. with 10 objects in the scene, 1 could have a normal map with some roughness on it, and the other 9 could be smooth without any normal map; I could then avoid storing useless information like the TBN on those 9 meshes and only keep it on one of them, but I can’t know when/if/for which objects this will happen, as only the end user knows.

So I either end up with all the VAOs carrying this data and deal with it, or I have to rebuild a mesh whenever someone puts a normal map on it.

I would assume the first option is more suitable and is what’s standard throughout many implementations, although I will have to investigate it.

Do you have a reason other than performance to care about passing data that may not be used?

Models cannot be completely ignorant of how they’re going to be rendered. You cannot interchangeably shove any model at any material and expect it to work. Each material has certain requirements about what information is available, and some of those requirements have to be fulfilled by the model.

A material that applies a texture needs a texture coordinate, and computing one out of thin air is usually not viable (not to mention that such a computation becomes part of the “material”, since you will need to parameterize its generation in some way). A material that applies a tinted surface color probably needs it to come from the model per-vertex. Etc.

The idea that the two are unrelated is incorrect. They don’t need to have a 1:1 correspondence or anything, but materials have needs and can only be used with models that fulfill those needs. And your system needs to account for this.

How you account for this is more or less up to you and your application’s needs.

I guess it is more a fear of filling up memory when a lot of meshes are loaded, but I’m guessing it might not have as much impact as I think it could.

Looking at it from that perspective, the TBN data can be seen the same way as UV data: some meshes use it and some don’t. It all boils down to whether classic textures (such as albedo/color and emissive) are more common than normal maps, and that could be my deciding question.
