Surface normals for a height field

I have to visualize the surface normals for a height field which is given as a texture. My job is only to implement a function vec3 normal(ivec2 tc) in the fragment shader. First I have to use central differences to calculate the tangent tx in the x-direction and the tangent ty in the y-direction for every point/pixel. Both tx and ty are vectors. Then I have to calculate n = tx × ty (cross product) to get the normal. The normal n will be used as the RGB value for my pixel.

The following variables and functions are given:

    uniform sampler2D tex;           // The height field, which contains the height in meters (float) for each point.
    uniform ivec2     texSize;       // The width and height of the field.
    uniform float     hMin;          // Minimum height that occurs in the texture.
    uniform float     hMax;          // Maximum height that occurs in the texture.
    uniform float     lengthScale;   // Length of a single pixel in meters.
    layout(location = 0) out vec4 fragColor;
    in vec2 texCoords;

    /**
    * Fetches the height at a point.
    */

    float fetch(ivec2 tc) {
        return texelFetch(tex, tc, 0).r;
    }

    void main() {
        vec4 color = vec4(0.1, 0.1, 0.1, 1);
        ivec2 tc = ivec2(texCoords*texSize);  // Integer (pixel) texture coordinates.
        color.rgb = normal(tc);
	
	
        fragColor  = color;
    }
    ...

So my question is: how can I compute the tangents tx and ty with central differences and the given variables/functions? I know that the central difference is
f'(x) = (f(x+h) - f(x-h)) / (2h)
for functions of one variable. But here I have two variables, and I also don't know what exactly to use for h.

This is what I have tried, but it doesn’t have any effect on the texture:

    vec3 normal( ivec2 tc ) {
        vec3 n = vec3(0.0, 0.0, 0.0);
    
        float txz = (fetch(ivec2(tc.x+texSize.x, tc.y))-fetch(ivec2(tc.x-texSize.x, tc.y)))/(2.0*texSize.x);
        vec3 tx = vec3(tc.x, tc.y, txz);
        float tyz = (fetch(ivec2(tc.x, tc.y+texSize.y))-fetch(ivec2(tc.x, tc.y-texSize.y)))/(2.0*texSize.y);
        vec3 ty = vec3(tc.x, tc.y, tyz);

        float nx = ty.y*tx.z-ty.z*tx.y;
        float ny = ty.z*tx.x-ty.x*tx.z;
        float nz = ty.x*tx.y-ty.y*tx.x;

        n = vec3(nx, ny, nz);
  
        return n;
    }

The main issues are:

  1. You’re offsetting by the size of the texture, sampling outside of it; you should be offsetting by ±1.
  2. You’re using the texture coordinates as the x/y components; these should be the offsets.

PS: GLSL has a built-in cross-product function.

Try:

    vec3 normal( ivec2 tc )
    {
        // Convert the 0..1 texture-value difference into the same units as the X/Y offsets (texels).
        float scale = (hMax-hMin)/lengthScale;
        float txz = (fetch(tc+ivec2(1, 0))-fetch(tc+ivec2(-1, 0))) * scale;
        vec3 tx = vec3(2, 0, txz);   // tangent along X: the central difference spans 2 texels
        float tyz = (fetch(tc+ivec2(0, 1))-fetch(tc+ivec2(0, -1))) * scale;
        vec3 ty = vec3(0, 2, tyz);   // tangent along Y
        vec3 n = cross(tx, ty);
        return normalize(n);
    }

Ideally, scale should be calculated externally and passed in as a uniform rather than being computed for every fragment.
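For example (heightScale is a hypothetical name, not part of the given interface), the shader could declare:

    uniform float heightScale;   // set by the application to (hMax - hMin) / lengthScale

and then use heightScale inside normal() instead of recomputing scale for every fragment.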

Also: a normalised vector will have components in the range [-1,1], but colour components are typically limited to the range [0,1]. Signed normalised formats exist for textures, but the default framebuffer will use an unsigned normalised format.
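If the goal is just to visualise the normal as a colour, a common remapping (an assumption about what the assignment expects to see) is:

    // In main(): remap the signed normal from [-1, 1] into the displayable [0, 1] range.
    color.rgb = normal(tc) * 0.5 + 0.5;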

Thank you for your reply.
I tried your code but couldn’t get the expected result.

When I changed vec3 tx = vec3(2, 0, txz); to vec3 tx = vec3(tc.x, tc.y, txz); (and the same for ty), it looked more like the surface normals for the height field, but still not exactly the expected result.

Why did you use (2,0) and (0,2) for the x and y components of tx and ty?
And also, why are you doing … * scale in txz and tyz instead of … / 2*scale, as it should be in central differences?

Because those are the X and Y components of the tangent vector.

That converts the 0…1 value read from the texture to something with the same scale as the X and Y.
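For example (made-up numbers): if hMax - hMin = 1000 m and lengthScale = 10 m per texel, then scale = 100, so a raw texture difference of 0.01 becomes 0.01 * 100 = 1, i.e. one texel's worth of height, in the same units as the X/Y step of 2 texels.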

For the X direction, you have a line from (x-1,y,z[x-1,y]) to (x+1,y,z[x+1,y]) where all three coordinates are in units of texels. The tangent vector is the difference between those two, i.e.
(x+1,y,z[x+1,y]) - (x-1,y,z[x-1,y]) = (2,0,z[x+1,y]-z[x-1,y])
Similarly for the Y direction. The length of the vector doesn't matter, only the direction, which is why the division by 2h from the central-difference formula can be dropped.

In terms of scale, what matters is that the Z coordinate has the same scale as the X and Y coordinates, i.e. a vector of (k,0,k) (for any k) should be at 45° to the horizontal.

Here, I’m assuming that hMin is the altitude (in metres) corresponding to a texture value of 0, hMax is the altitude for a texture value of 1, and lengthScale is the physical size (in metres) of a texel. If hMin and hMax have different semantics, the scale calculation would need to change accordingly.

So the difference between (2,0,(z2-z1)*(hMax-hMin)/lengthScale) and (2*lengthScale,0,(z2-z1)*(hMax-hMin)) is that the former is in texels while the latter is in metres; both have the same direction.
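To make that concrete, here is a sketch of the same function written entirely in metres (my own variant, assuming the uniform semantics described above); it should give the same normal direction:

    vec3 normalMetres(ivec2 tc) {
        float hScale = hMax - hMin;                       // texture value 0..1 -> metres
        float dzdx = (fetch(tc + ivec2(1, 0)) - fetch(tc + ivec2(-1, 0))) * hScale;
        float dzdy = (fetch(tc + ivec2(0, 1)) - fetch(tc + ivec2(0, -1))) * hScale;
        vec3 tx = vec3(2.0 * lengthScale, 0.0, dzdx);     // tangent along X, in metres
        vec3 ty = vec3(0.0, 2.0 * lengthScale, dzdy);     // tangent along Y, in metres
        return normalize(cross(tx, ty));
    }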
