I’ve currently set up a method that looks at two objects to see if they share any vertices (like two wall segments touching) and, if so, takes the normals at those vertices and adds them together to smooth out the seam where they touch. The geometry is something like a bumpy cave wall, and I believe I’ve achieved the desired effect.
What I’m not sure about is whether I should be normalizing the normals after adding them together. The documentation on glNormal(), which I’m not using, states that “[n]ormals specified with glNormal need not have unit length. If GL_NORMALIZE is enabled, then normals of any length specified with glNormal are normalized after transformation.”
It appears that enabling GL_NORMALIZE does have a positive effect on my results. Just to show what I mean, here’s some screens:
Without the operation performed at all:
With the operation, but GL_NORMALIZE not enabled. Much too shiny…
So I guess at the end of all this, the question is: how much overhead am I looking at with GL_NORMALIZE enabled? I can easily perform the normalization prior to the rendering loop (as soon as I figure out what the fixed-point math will look like — I’m not using floats), but I’d just as soon move on to bigger things if I don’t need to.
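For reference, here’s one possible shape the fixed-point pre-normalization could take, assuming a Q16.16 format (one guess among many — the shifts would change for a different format). It uses a plain binary-search integer square root; faster approximations exist, but this shows the scaling.

```c
#include <stdint.h>

typedef int32_t fixed;            /* Q16.16 */
#define FIX_ONE (1 << 16)

/* Integer square root of a 64-bit value via binary search. */
static uint32_t isqrt64(uint64_t v) {
    uint64_t lo = 0, hi = 0xFFFFFFFFull;
    while (lo < hi) {
        uint64_t mid = (lo + hi + 1) >> 1;
        if (mid * mid <= v) lo = mid; else hi = mid - 1;
    }
    return (uint32_t)lo;
}

/* Normalize (x, y, z) in place to unit length (FIX_ONE). */
void fixed_normalize(fixed *x, fixed *y, fixed *z) {
    /* Squared length: Q16.16 * Q16.16 gives Q32.32, which fits
       in 64 bits for normal-sized inputs. */
    int64_t sq = (int64_t)*x * *x + (int64_t)*y * *y + (int64_t)*z * *z;
    uint32_t len = isqrt64((uint64_t)sq);   /* back to Q16.16 */
    if (len == 0) return;
    /* Shift up before dividing so the quotient stays in Q16.16. */
    *x = (fixed)(((int64_t)*x << 16) / len);
    *y = (fixed)(((int64_t)*y << 16) / len);
    *z = (fixed)(((int64_t)*z << 16) / len);
}
```

Doing this once per vertex up front, then leaving GL_NORMALIZE off, avoids paying a per-vertex normalize every frame in the fixed-function pipeline.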