Dark edges on terrain heightmap

Hi,

On terrain I generate from a heightmap, I get artifacts where the mesh edges show up as dark marks.
Strangely, it is the quad edges of the grid that appear darker, while the diagonal edge inside each quad does not, and even tends to be lighter.
The problem only appears when the terrain is steep enough, and under certain lighting angles.

Here is a picture of the problem (showing the diffuse component only):

Here is the code I use to generate the normals:

glm::vec3 Terrain::vertexNormal(int x, int y)
{
    glm::vec3 normalAdjacentSurface1 = surfaceNormal(
      retrievePosition(x, y),
      retrievePosition(x+1, y-1),
      retrievePosition(x, y-1));

    glm::vec3 normalAdjacentSurface2 = surfaceNormal(
      retrievePosition(x, y),
      retrievePosition(x+1, y),
      retrievePosition(x+1, y-1));

    glm::vec3 normalAdjacentSurface3 = surfaceNormal(
      retrievePosition(x, y),
      retrievePosition(x, y+1),
      retrievePosition(x+1, y));

    glm::vec3 normalAdjacentSurface4 = surfaceNormal(
      retrievePosition(x, y),
      retrievePosition(x-1, y+1),
      retrievePosition(x, y+1));

    glm::vec3 normalAdjacentSurface5 = surfaceNormal(
      retrievePosition(x, y),
      retrievePosition(x-1, y),
      retrievePosition(x-1, y+1));

    glm::vec3 normalAdjacentSurface6 = surfaceNormal(
      retrievePosition(x, y),
      retrievePosition(x-1, y),
      retrievePosition(x, y-1));

    glm::vec3 normal = glm::vec3(0.0f, 0.0f, 0.0f);

    normal += normalize(normalAdjacentSurface1);
    normal += normalize(normalAdjacentSurface2);
    normal += normalize(normalAdjacentSurface3);
    normal += normalize(normalAdjacentSurface4);
    normal += normalize(normalAdjacentSurface5);
    normal += normalize(normalAdjacentSurface6);

    normal = normalize(normal);

    return normal;
}

glm::vec3 Terrain::surfaceNormal(glm::vec3 P0, glm::vec3 P1, glm::vec3 P2)
{
  return normalize(glm::cross(P1 - P0, P2 - P0));
}

glm::vec3 Terrain::retrievePosition(int x, int y)
{
  float height = getVertexFromWorldCoordinates(x, y)->height;
  return glm::vec3(x, height, y);
}

In the fragment shader, I use a simple Blinn-Phong model. Here is the diffuse part (which is where the problem comes from):

  float diffuseStrength = 0.9;
  vec3 norm = normalize(Normal);
  vec3 lightDir = normalize(lightPos - FragPos);
  float diff = max(dot(norm, lightDir), 0.0);
  vec3 diffuse = diff * lightColor * diffuseStrength;

What could I be doing wrong?

One thing which stands out is that normalAdjacentSurface6 has the opposite sense to the other five normals.

Also, the weighting is uneven: a quadrant which is split along the diagonal has twice the weight of a quadrant which isn't. When generating vertex normals from face normals, it's common to average the unnormalised cross products and only normalise the final result. That way, smaller triangles and smaller angles carry less weight.
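For example, something along these lines (a rough sketch against your existing helpers; vertexNormalAreaWeighted is just a placeholder name, and the last triangle's winding is flipped so that all six point the same way):

glm::vec3 Terrain::vertexNormalAreaWeighted(int x, int y)
{
    glm::vec3 p = retrievePosition(x, y);
    glm::vec3 n(0.0f);

    // Accumulate the raw (unnormalised) cross products of the six adjacent
    // triangles, so larger triangles contribute more, then normalise once.
    n += glm::cross(retrievePosition(x+1, y-1) - p, retrievePosition(x,   y-1) - p);
    n += glm::cross(retrievePosition(x+1, y  ) - p, retrievePosition(x+1, y-1) - p);
    n += glm::cross(retrievePosition(x,   y+1) - p, retrievePosition(x+1, y  ) - p);
    n += glm::cross(retrievePosition(x-1, y+1) - p, retrievePosition(x,   y+1) - p);
    n += glm::cross(retrievePosition(x-1, y  ) - p, retrievePosition(x-1, y+1) - p);
    n += glm::cross(retrievePosition(x,   y-1) - p, retrievePosition(x-1, y  ) - p);

    return glm::normalize(n);
}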

But for a height map, you’d typically generate the normals directly from the vertices without considering the triangulation, e.g.:

// Central differences about the sample at z[y+1][x+1].
dx = (z[y+1][x+2]-z[y+1][x+0])/(2*xstep);
dy = (z[y+2][x+1]-z[y+0][x+1])/(2*ystep);
dz = sqrt(1-dx*dx-dy*dy);

Thanks for these insights, GClements.

I have corrected normalAdjacentSurface6, and I have weighted normalAdjacentSurface3 and normalAdjacentSurface6 by multiplying them by 2.
Though this is surely more correct, the problem is still there:

I have also tried your last, simplified proposal, but it yields poorer results overall (maybe because I use hydraulic erosion, so my terrain contains abrupt height variations). Also, I only compute the normals once, so I'm not concerned about performance here.

I’ve just noticed that dx and dy should be negated. Also, dz=1 is more accurate, but requires normalisation (which you’re doing anyhow).
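Folding both fixes into your grid layout, it could look roughly like this (a sketch only; vertexNormalFromHeights is a placeholder name, and it assumes a grid step of 1 as in your retrievePosition):

glm::vec3 Terrain::vertexNormalFromHeights(int x, int y)
{
    // Central differences of the height; grid y maps to world z in
    // retrievePosition, and the grid step is 1, so the divisor is 2.
    float dhdx = (retrievePosition(x + 1, y).y - retrievePosition(x - 1, y).y) / 2.0f;
    float dhdy = (retrievePosition(x, y + 1).y - retrievePosition(x, y - 1).y) / 2.0f;

    // Negated gradients, a unit up component, and a single normalisation at the end.
    return glm::normalize(glm::vec3(-dhdx, 1.0f, -dhdy));
}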

Other than that, you could try setting the fragment colour to the normal value, remapped to the [0,1] range (i.e. fragColor=(norm+1)/2) to see if there’s any obvious pattern to the error.
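In shader terms, that debug output would be something like this (a sketch; the FragColor output and the #version line are assumptions about your setup):

#version 330 core

in vec3 Normal;
out vec4 FragColor;

void main()
{
    // Visualise the interpolated normal instead of the lighting:
    // remap each component from [-1, 1] to [0, 1].
    vec3 norm = normalize(Normal);
    FragColor = vec4(norm * 0.5 + 0.5, 1.0);
}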

I have visualised the normals as you suggested, and it confirms that the normals show the same artifacts. I have also observed that the problem never appears on flat terrain, regardless of the terrain's orientation.

I wonder if this could be related to the way I interpolate neighbour data to calculate vertex normals.

For now I just average the neighbouring face normals (so there is no interpolation of the vertex normals).

Maybe I could try something like bilinear or bicubic interpolation to smooth this further.
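To make the idea concrete, the kind of bilinear blend I have in mind would look roughly like this (just a sketch; normalAt is a hypothetical accessor for the already-computed vertex normals):

glm::vec3 Terrain::smoothedNormal(float fx, float fy)
{
    // Bilinear blend of the four vertex normals surrounding the fractional
    // grid position (fx, fy), renormalised because a linear blend of unit
    // vectors is generally not unit length.
    int x0 = (int)std::floor(fx);
    int y0 = (int)std::floor(fy);
    float tx = fx - (float)x0;
    float ty = fy - (float)y0;

    glm::vec3 bottom = glm::mix(normalAt(x0, y0),     normalAt(x0 + 1, y0),     tx);
    glm::vec3 top    = glm::mix(normalAt(x0, y0 + 1), normalAt(x0 + 1, y0 + 1), tx);

    return glm::normalize(glm::mix(bottom, top, ty));
}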

I think you may be right.

IIRC, you’re just LERPing (linear blending) the normal vectors together, which shouldn’t give you a smooth rotation of the normals.

This reminds me of the joint-collapse artifacts you get with Linear Blend Skinning: because of the LERP, you cut straight across the interior of the joint (causing the joint to collapse) rather than rotating "around" the joint, which would preserve its volume. Also, when blending vectors with LERP + normalize, you don't get a constant rotation speed, unlike SLERP. The solutions involve doing the latter rather than the former, for instance using Spherical Blend Skinning (SLERP) or Dual Quaternion Skinning (which can smoothly interpolate rotation and translation).
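To illustrate the difference between the two blends for unit vectors (a minimal standalone sketch, not anything from your code):

#include <cmath>
#include <glm/glm.hpp>

// Normalised LERP: straight linear blend, then renormalise.
// The direction sweeps from a to b, but not at a constant angular speed.
glm::vec3 nlerp(const glm::vec3 &a, const glm::vec3 &b, float t)
{
    return glm::normalize(glm::mix(a, b, t));
}

// SLERP: constant angular speed between the two unit vectors.
glm::vec3 slerp(const glm::vec3 &a, const glm::vec3 &b, float t)
{
    float cosTheta = glm::clamp(glm::dot(a, b), -1.0f, 1.0f);
    float theta = std::acos(cosTheta);
    if (theta < 1e-5f)
        return nlerp(a, b, t);   // nearly parallel: LERP is fine and avoids dividing by ~0
    float s = std::sin(theta);
    return (std::sin((1.0f - t) * theta) / s) * a + (std::sin(t * theta) / s) * b;
}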

See below for a related paper. I'll bet @GClements can point you to better, more recent work. I'm on my phone, or I'd look a bit more.

In particular, see Figure 2 for what I was referring to w.r.t. LBS.

Thanks for your input and the article, Dark_Photon!

Indeed, I have implemented bilinear interpolation and it definitely improves things. There are still darker areas around some edges, but the mesh really is distorted at those places, so it no longer looks pathological. I may implement bicubic interpolation just to see.