lord cronos:
the whole lighting scheme you’re referring to is a VERY BIG HACK
it is BY NO MEANS IN ANY WAY RELATED TO REALITY
it’s just one thing:
fast
as GPUs get more programmable and faster at rendering, you can use better approaches to simulate lighting…
and yes, bumps do change in parts where there is no direct lighting. why? because there is INDIRECT lighting surrounding them. and even if it’s quite smoothly distributed, EVERY light ray has a direction, so for a more correct ambient term you get the directions of the indirect light sources that affect the object… so yes, if you rotate, your bumps are lit differently…
where is my approach from?
from nvidia i think, but i’m not sure anymore…
the base idea was to do image-based lighting, which is already done for the specular term of the equation. result: per-pixel bump-mapped reflections with cubemaps… they are by no means correct reflections, but assuming you have some environment info you can generate a cubemap from, you get quite good results with this…
now the diffuse term and the ambient term are in fact the same thing, ambient just does not exist on its own…
how to do the diffuse then? say you have an envmap surrounding your object, then you can integrate over the whole hemisphere “above your surface normal” to get the lighting (with convolution filters on cubemaps, heavy math)…
now this means if the normal faces some very bright part of the cubemap it will be bright there, else not…
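to make the hemisphere convolution concrete, here’s a minimal brute-force sketch in Python (CPU math, not GPU code). the `env_radiance` function is a made-up stand-in for a real cubemap lookup, and the sampling is deliberately naive — real implementations convolve the cubemap offline:

```python
import math

def env_radiance(d):
    """Toy environment: blue-ish sky above, green-ish ground below.
    This is an assumption standing in for a real cubemap lookup."""
    t = d[2] * 0.5 + 0.5                     # map z from [-1,1] to [0,1]
    return (0.0, (1.0 - t) * 0.6, t * 0.9)   # (r, g, b)

def irradiance(n, steps=16):
    """Cosine-weighted average of the environment over the hemisphere
    above normal n — the diffuse lookup described above, brute-forced."""
    total = [0.0, 0.0, 0.0]
    weight = 0.0
    for i in range(steps):
        for j in range(steps * 2):
            theta = math.pi * (i + 0.5) / steps            # polar angle
            phi = 2.0 * math.pi * (j + 0.5) / (steps * 2)  # azimuth
            d = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            cos_term = d[0] * n[0] + d[1] * n[1] + d[2] * n[2]
            if cos_term <= 0.0:
                continue                       # direction is below the hemisphere
            w = cos_term * math.sin(theta)     # cosine * solid-angle weight
            r = env_radiance(d)
            total = [t + w * c for t, c in zip(total, r)]
            weight += w
    return tuple(t / weight for t in total)
```

an up-facing normal ends up mostly blue, a down-facing one mostly green — exactly the “normal faces a bright part of the cubemap” behaviour.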
now for the “diffuse” we normally use, we can simply keep point lights like before… max(0, L.N), where L = normalized point-to-light vector and N = normalized surface normal…
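for completeness, the classic point-light term above in a tiny Python sketch (names are mine, the formula is the standard one):

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(surface_point, light_pos, normal):
    """Diffuse point-light term: max(0, L.N)."""
    l = normalize(tuple(lp - sp for lp, sp in zip(light_pos, surface_point)))
    n = normalize(normal)
    return max(0.0, sum(a * b for a, b in zip(l, n)))
```

the max(0, …) clamps away normals facing away from the light, which would otherwise go negative.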
now the ambient results from all those not-so-bright “light sources” in the cubemap which are not direct point lights (else we would have millions of those to calculate)…
say for a landscape engine this means every normal facing up will see blue, every normal facing down will see… well… depends… say green for a nice landscape
… that means you have to dot the normal against the point_to_sky vector (which points straight up) to find an interpolation value from blue to green…
now because the dot product is signed you have to map it to some unsigned range… that’s what the *.5+.5 is for…
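the whole sky/ground trick fits in a few lines. a minimal sketch, assuming +z is up and two made-up colors:

```python
def hemisphere_ambient(normal, sky=(0.4, 0.6, 1.0), ground=(0.2, 0.5, 0.1)):
    """Blend ground->sky by how much the (unit) normal faces up.
    normal[2] is the dot with the straight-up point_to_sky vector;
    it lies in [-1,1], so *0.5+0.5 remaps it to [0,1] for the lerp."""
    up_ness = normal[2] * 0.5 + 0.5
    return tuple(g + (s - g) * up_ness for s, g in zip(sky, ground))
```

an up-facing normal gets pure sky color, a down-facing one pure ground, everything in between interpolates.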
for arbitrary light sources it works quite well to say that in the direction of the light our ambient term gets brighter… bright means warm colors, like the yellow of the bunny
dark areas mean cold colors, like the blue of the bunny…
this means that if the normal is looking towards the light source, it will catch more of the warm part, because the light source “warms up” the whole environment-map hemisphere in that direction…
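the warm/cold variant is the same lerp, just keyed on the light direction instead of straight up. a sketch with invented warm/cold colors (both vectors assumed unit length):

```python
def warm_cold_ambient(normal, light_dir,
                      warm=(1.0, 0.9, 0.5),   # assumed "yellow-ish" warm color
                      cold=(0.3, 0.4, 0.7)):  # assumed "blue-ish" cold color
    """Normals facing the light pick up the warm color, normals facing
    away pick up the cold one -- the bunny-style ambient trick."""
    d = sum(n * l for n, l in zip(normal, light_dir))  # signed, in [-1,1]
    t = d * 0.5 + 0.5                                  # remap to [0,1]
    return tuple(c + (w - c) * t for w, c in zip(warm, cold))
```

swap the colors or the light direction and the whole “warmed up hemisphere” follows along for free.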
of course, this is a VERY ROUGH approximation…
but the results are awesomely good if you take a look at the bunny (the pic is not by me but from some pdf about this whole topic, but i dunno anymore where it’s from… i’ll take a look around when i find the file on the hd… google should be smart enough)
but i like your approach as well… saying if the normal points straight along the face normal it will get full light, and the more it faces away from it, the less light it can catch…
yours is independent of the light direction, so it could be precalculated at normal-map generation btw
well… just grab the blue component out of your normal map and you have it (GL_BLUE in the register combiners)