Downsample normal map


I’ve got a problem concerning the creation of mipmaps for normal maps.

I have found the following comments in the NVidia quake2 bumpmapping example:

/* Structure to encode a normal like an 8-bit unsigned BGRA vector. */
typedef struct {
/* Normalized tangent space perturbed surface normal. The
[0,1] range of (nx,ny,nz) gets expanded to the [-1,1]
range in the combiners. The (nx,ny,nz) is always a
normalized vector. */
GLubyte nz, ny, nx;

/* A scaling factor for the normal. Mipmap level 0 has a constant
magnitude of 1.0, but downsampled mipmap levels keep track of
the unnormalized vector sum length. For diffuse per-pixel
lighting, it is preferable to make N’ be the unnormalized
vector, but for specular lighting to work reasonably, the
normal vector should be normalized. In the diffuse case, we
can multiply by the “mag” to get the possibly shortened
unnormalized length. */
GLubyte mag;

/* Why does “mag” make sense for diffuse lighting?

 Because sum(L dot Ni)/n == (L dot sum(Ni))/n 

 Think about a bumpy diffuse surface in the distance.  It should
 have a duller illumination than a flat diffuse surface in the
 distance. */

/* On NVIDIA GPUs, the RGBA8 internal format is just as memory
efficient as the RGB8 internal texture format so keeping
“mag” around is just as cheap as not having it. */

} Normal;

Does that mean you would actually need two different normal maps for diffuse and specular lighting, one with unnormalized and one with normalized vectors? Or do you multiply the RGB-encoded normals by the alpha (mag) component in the register combiners to get an unnormalized version for diffuse lighting?

Thanks in advance

The second way.
Multiply by mag and you get the unnormalized value back (which is the same as if you had filtered the normals down normally, so it acts as a kind of supersampling for the diffuse part… that’s why it is used there…)

I’m learning more and more about that bumpmapping…
Thanks for your help, davepermen!