# Landscape lighting looks a little… odd

Hi there!

I’ve been working on my final year project at university for a little while now and have just created a landscape for things to take place on. I’ve been coding in OpenGL for a good while now, but every time I create a landscape and calculate the normals I get the feeling something is going wrong.

The algorithm I’m using is the standard “calculate every face normal, then work out the vertex normal by adding together the face normals from all the faces surrounding the vertex and averaging them”. I must have checked the code for this a million times but something still isn’t right…

Certain areas of the landscape end up getting an odd chequerboard pattern on them. This is an example of what appears:

and a regular view of the landscape:

So… basically, I want to know whether this is a common occurrence given the angle of the light, or whether it indicates a problem somewhere in my landscape construction or my normal calculation.

If you would like me to post any of the code used to render the landscape I’d be glad to oblige.

-Andy

Hi Andy,

Just a quick thought: do you render the terrain (possibly based on some heightmap matrix) using GL_QUAD* or using triangles?

Some months ago I was doing something similar, and some of the quads were “broken” in half because of the topology of the terrain, which resulted in a bad lighting look. The “broken” look was simply caused by the fact that the vertices of one quad were not in one plane.

So try rendering it with a triangle strip if you aren’t doing so already; otherwise I don’t know.

Bye,
Petr

I’m actually already rendering the terrain with a triangle strip. However, looking at the landscape in flat-shaded mode, it appears as though a lot of adjacent triangles have the same surface normals. I’m not sure whether this is an indication of a problem or just a coincidence.

They’re not adjacent in a quad shape though, they’re side by side… As an example:

Time for some more staring methinks…

I see 3 possible reasons:
-you calculate 1 normal per quad instead of 1 normal per triangle
-you use wrong vertices when calculating triangle’s normal
-you add wrong normals when calculating final normal (at vertex)

Anyway, you can consider using a lightmap (or normal map) instead. In that case you compute one normal per quad and store the results in a texture, then apply that texture to the landscape instead of doing per-vertex lighting.
Using lightmaps also gives you the possibility of baking static shadows into the lightmap at no cost.
Of course, if you want dynamic lighting then you can forget the lightmap and static shadows, but a normal map will do just fine.

Hi,

The algorithm I’m using is the standard “calculate every face normal, then work out the vertex normal by adding together the face normals from all the faces surrounding the vertex and averaging them”. I must have checked the code for this a million times but something still isn’t right…

It’s not too long ago that I made something similar, for general 3D models. I’ve done it this way:

• Calculate a normal for each polygon
• Assign these normals to each of its vertices
• Find all vertices, across the various polys, that have the same XYZ
• Average those vertex normals

Hope it is not the same as what you posted above (my English input/output needs an update), but this way the results were pretty usable.

Hope it helps.

Bye,
Petr

[EDIT: too late ]

Originally posted by k_szczech:
I see 3 possible reasons:
-you calculate 1 normal per quad instead of 1 normal per triangle
-you use wrong vertices when calculating triangle’s normal
-you add wrong normals when calculating final normal (at vertex)

I am certainly calculating per triangle: I do a plane-normal calculation for (0,0), (0,1), (1,0) and then for (1,0), (0,1), (1,1) on every square of terrain, and I’ve rewritten this part of the algorithm using both proper plane calculations and an optimised height-subtraction method, with the same outcome.
I also tested this with a small section of land and printed the coordinates of the vertices the algorithm was using to calculate the surface normals, and they appeared to be correct.

I have also rewritten the other part you suggested could be wrong: calculating the vertex normal from the surface normals. I have used both the inline method (calculating each surface normal, adding it to the normal of each of the three vertices that make up the triangle, and normalising all the vertex normals at the end) and the “sliding six” method (creating an array of surface normals, then looping through the vertices and calculating each normal from the six surrounding triangles). Both give the same result.

Those were my initial assumptions about what was wrong too, but I can’t understand how the same problem could still exist after I’ve used two different algorithms for each section, taken from different sources, and the outcome is exactly the same! Hence why I was hoping someone else might have run into the same problem.

I do like your idea of the lightmap though, since I was hoping to add some static shadows and whatnot; hopefully it won’t be too much effort for the computers this has to run on (they don’t have hardware support… grr…), since I’ve spent most of the last 3 weeks getting objects to render only when they are absolutely needed.

Thanks again for your help, it really is appreciated!

Just to make sure: you are normalizing the normals? You mention averaging the face normals together for a vertex, which I think might give you this sort of result if you didn’t normalize again afterwards.

Originally posted by <Anon>:
Just to make sure: you are normalizing the normals? You mention averaging the face normals together for a vertex, which I think might give you this sort of result if you didn’t normalize again afterwards.
Yeah, I’ve tried normalising them again after taking the average, although I do normalise the face normals as I go along, so the average should still be a unit vector. Just to make sure, I enabled GL_NORMALIZE and it made no difference.

adding it to the top of each of the three vectors that make up the triangle and then normalising all the vertices at the end
Yup, as <Anon> says: normalize before you add, and then again, after you add.

hopefully it won’t be too much effort for the computers this has to run on (they don’t have hardware support… grr…)
I bet these computers have a TNT2 but no drivers installed, so kick someone’s butt to install them.
You can always try Mesa: it’s a software implementation, but it’s way better than Microsoft’s (don’t even try to compare the two).
Anyway, if you really do end up in software rendering, don’t use texture filtering and perhaps there’s a chance you’ll be able to use textures.

I had the same problem: our computers supported DirectX but were not able to run OpenGL (not even in software!), and these were ‘advanced computer graphics programming’ classes.

Originally posted by k_szczech:
[b]
I bet these computers have a TNT2 but no drivers installed, so kick someone’s butt to install them.
You can always try Mesa: it’s a software implementation, but it’s way better than Microsoft’s (don’t even try to compare the two).
Anyway, if you really do end up in software rendering, don’t use texture filtering and perhaps there’s a chance you’ll be able to use textures.

I had the same problem: our computers supported DirectX but were not able to run OpenGL (not even in software!), and these were ‘advanced computer graphics programming’ classes. [/b]
Hehe, yeah, I know exactly how you feel! I’m at one of the top universities for computer science in the country and we don’t even have hardware support on our computers. I’ve tried to get something done about it, seeing as all the computers actually have GeForce 2 cards in them, but they claim they won’t install the drivers for support reasons: if all the computers spontaneously ignite, they can’t get compensation from nVidia. Or something.

They are running Mesa though, so at least I can get something out of them! Incidentally, all my stuff is in Linux, so I don’t have to worry about crappy Microsoft implementations.

Back to the problem anyway, I just had an interesting development…

I was going to give up hope and go down the route of lightmapping, so I thought I’d ask 3D Studio Max whether it could create a landscape for me using my heightmap and then generate a lightmap from that. So I built myself a plane, deformed it using my heightmap… and would you believe it?

There are the chequers!

Now, you can’t tell me 3D Studio Max’s normal calculations are wrong!

So… this leads me to believe that it’s actually my heightmap that’s become corrupted. The reason, I presume (since I was using a commercial-quality one), is that I scaled it down from 1024x1024 to 256x256 using Paint Shop Pro to take up less memory. So it looks like Paint Shop Pro’s resizing algorithm adds artifacts when it resizes things. Does this sound like a reasonable conclusion?

I seriously hope that using a different heightmap will fix it; this has been driving me crazy for far too long!

[EDIT: nope… 3D Studio Max still shows the chequers even with the original heightmap… so weird!]

-Andy

I think the “checkers” you’re seeing are what’s sometimes called “Mach banding,” an artifact that results from linear interpolation of color (or quick changes in color/intensity over a short distance) and subtleties in perception.

If you interpolate normals instead, the effect is far less severe (but still noticeable). Here’s an image of a surface with normal interpolation using a plain white texture and an orange light source.

You can still notice a little banding in the foreground, but with textures it’s practically invisible.

Originally posted by Minstrel:
I think the “checkers” you’re seeing are what’s sometimes called “Mach banding,” an artifact that results from linear interpolation of color (or quick changes in color/intensity over a short distance) and subtleties in perception.
Hey, thanks for the information - I was beginning to suspect that some kind of interpolation issue was causing the artifacts since it only seems to be present in areas that have lots of sudden changes in height.

Thanks everyone for your suggestions - I’d probably still be wrestling with my code to see where the invisible typo was if I hadn’t asked!

Incidentally, does anyone have any suggestions as to where I should place my single light source in order to minimise this problem, or just a good place in general?

Currently I have it in a fixed position far off to the side of the landscape and quite high above it. I’m not really sure how the placement of the light affects the look of the terrain, and moving it around by trial and error doesn’t really help matters!

Should the position of the light source change as the camera moves around? I find that if I look downwards towards the landscape it becomes a lot darker, and when the camera is level with the landscape it tends to look a lot brighter. I’m just using a simple gluLookAt call with the vectors coming from my quaternion-based camera.
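For what it’s worth, view-dependent brightness like that is often a sign that the light position is specified before the camera transform: fixed-function OpenGL transforms GL_POSITION by the modelview matrix in effect at the moment of the glLightfv() call, so a light set with an identity modelview ends up fixed relative to the camera rather than the world. A minimal per-frame setup sketch (the function and variable names here are hypothetical, not from Andy’s code):

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* Hypothetical per-frame setup: to keep the light fixed in WORLD space,
   reset the modelview matrix, apply the camera transform first, and only
   then specify the light position. */
static void setupCameraAndLight(const float eye[3], const float centre[3],
                                const float up[3])
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* Camera transform first... */
    gluLookAt(eye[0], eye[1], eye[2],
              centre[0], centre[1], centre[2],
              up[0], up[1], up[2]);

    /* ...then the light, so GL_POSITION is transformed as a world-space
       point (w = 1.0 makes it positional rather than directional). */
    GLfloat lightPos[4] = { 300.0f, 100.0f, 0.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
}
```

Specifying the light before gluLookAt (or with an identity modelview) instead makes it follow the camera, which would explain the brightness changing as you look around.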

Cheers

Hi,

would it be possible to give us the terrain data (in RAW or something easily readable) so we can examine it and play with it to find a good light position, or problems in the mesh?

Thanks,
Petr

Sure!

This is the original height map I’ve been using, in RAW form and in the TGA form I import into my program (they’re both about 2 megs!):

The last one is a scaled down version I’ve been using to save memory, it’s only 200k.

Many thanks!

Hi,

thanks for the data; I’m playing with it now. I was a bit confused at first because I expected a RAW file of 810 000 bytes, and I see it’s three times bigger.
I was wondering why the terrain looked different.

So now I presume it is 3 bytes per Y coordinate; I hope to post some results soon.
Just one thing: how big a division factor do you apply to the Y coordinates? I’m using 10 at the moment, but I’m not sure that’s what you do.

Bye,
Petr

I’ve never used the RAW format before, so I just converted the TGA to RAW in Paint Shop Pro and uploaded the result.

On my 256x256 TGA terrain, I create the terrain as 256 units by 256 units, with the height 10 units at the highest point (I take the data, divide it by the maximum possible value of the y coordinate, and then multiply that by 10).

Is that reasonable?
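That scaling is just a linear remap; assuming 8-bit height samples (an assumption on my part), it amounts to:

```c
#include <assert.h>
#include <math.h>

/* Map an 8-bit height sample onto 0..10 world units: divide by the
   maximum possible raw value, then multiply by the target height. */
static float scaleHeight(unsigned char raw)
{
    return ((float)raw / 255.0f) * 10.0f;
}
```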

Originally posted by jide:
In what way? The landscape is generated using a triangle strip, so if that were wound incorrectly the backface culling would cause loads of issues.

I can see from the wireframe view of the landscape that the triangles are oriented like this:

|__
|\ |
|_|__

(How crude!) And when I calculate the surface normals I use the correct vertices…

Where else could a triangle winding issue crop up?

Cheers,
-Andy

I’m speaking about this: http://www-users.york.ac.uk/~adk105/weird2.JPG

I had similar things recently and discovered that, unfortunately, the triangle windings were reversed. Back-face culling was enabled, as was lighting.