Ambient light

I have been playing with per-pixel lighting.
Now my question is: how do you guys handle ambient light? I have read Ron Frazier's paper about the issue.
However, that solution to ambient is not good, because the bump mapping only appears if you shine a light over the surface, and in the ambient parts of the world it looks completely different.



Don't use a constant color, but lerp (depending on the dot of the normal with the light).

hope you got the inspiration…

I think I got the idea.

Here goes: the ambient light vector is always parallel to the normal vector, right? So I just put the normal into tangent space.

That would be:
ambient = basemap * (N'·N) * ambient_color

where N' comes from the normal map and N is the normal vector transformed into tangent space.

Or am I way off here?

Thanks for the reply, davepermen.

[This message has been edited by Tandy (edited 05-05-2002).]
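A minimal numeric sketch of that proposed term (names like `basemap_texel` and `ambient_color` are placeholder names of my own; in tangent space the unperturbed surface normal N is simply (0, 0, 1)):

```python
# Sketch of the tangent-space ambient term discussed above:
#   ambient = basemap * (N'.N) * ambient_color
# where N' is the normal-map normal and N is the surface normal,
# both expressed in tangent space. Values are illustrative only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ambient_term(basemap_texel, n_prime, ambient_color):
    # In tangent space the unperturbed surface normal is (0, 0, 1).
    n = (0.0, 0.0, 1.0)
    factor = max(0.0, dot(n_prime, n))   # clamp, like a diffuse term
    return tuple(b * factor * a for b, a in zip(basemap_texel, ambient_color))

# A texel whose normal-map normal tilts away from N receives less ambient:
flat = ambient_term((1.0, 1.0, 1.0), (0.0, 0.0, 1.0), (0.2, 0.2, 0.3))
tilted = ambient_term((1.0, 1.0, 1.0), (0.707, 0.0, 0.707), (0.2, 0.2, 0.3))
```

So the bumps still show up under ambient: tilted texels darken relative to flat ones, even with no light direction involved.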

I don't really think a bump map makes sense for ambient lighting. The way a bump map works is that it adds shading/shadowing to portions of a surface that are not orthogonal to the light vector. The odd thing about ambient lighting is that there is NOT a light vector, because ambient light has NO DIRECTION. Ambient light, by definition, comes equally from all directions. Thus, all surfaces are lit equally and uniformly, a concept which is contradictory to the purpose of a bump map.
I believe my solution to ambient lighting is indeed correct.

Ron Frazier

The reason bump mapping looks wrong in pure ambient conditions is that ambient is a fudge. There is no such thing as ambient light in the real world.

I think Dave's picture illustrates something that looks a lot more realistic.

You know, somehow when I read this thread the first time, I didn't look at Dave's picture (not sure why). After looking at it, I can see why you would want to do bump-mapped ambient lighting, and N'·N seems like a reasonable approach to this.

Why he wants this is simple:
yes, ambient has no bumps,
but the world has no ambient;
everything in shadow would be black, not ambient.

So why then ambient and not black?
"Not in light" does not mean black in the real world, because of:
indirect illumination/global illumination.

Well… your approach (N·N') or my approach ((N·point_to_light)*.5+.5) for finding some ambient lighting value are both ways to get some pseudo global illumination… which looks better… well… play around and you'll see.

(Mine is ambient = lerp(N·point_to_light*.5+.5, lightened_color(yellow), darkened_color(blueish)).)
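A small sketch of that lerp-based pseudo-ambient (the warm/cool colors below are made-up placeholders, not davepermen's actual values):

```python
# Sketch of the warm/cool ambient lerp quoted above:
#   ambient = lerp(N . point_to_light * 0.5 + 0.5, warm, cool)
# The signed dot product is remapped from [-1, 1] into [0, 1] first.

def lerp(t, a, b):
    # component-wise interpolation: t = 1 -> a (warm), t = 0 -> b (cool)
    return tuple(b_i + t * (a_i - b_i) for a_i, b_i in zip(a, b))

def pseudo_ambient(n_dot_l, warm=(1.0, 0.9, 0.5), cool=(0.3, 0.4, 0.7)):
    # map the signed dot product into an unsigned interpolation value
    t = n_dot_l * 0.5 + 0.5
    return lerp(t, warm, cool)

facing = pseudo_ambient(1.0)    # normal points at the light -> warm color
away = pseudo_ambient(-1.0)     # normal points away -> cool color
```

Because t never clamps to a flat constant, the bumps keep shifting hue as the normal turns, which is what gives the bunny its look.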

Not having actually done this, I'm trying to picture it in my head and on paper, so indulge me here for a moment.

First, with regard to your (N·point_to_light)*.5+.5 formula: how did you come up with it? Do you have some sort of reasoning, or did you just pick it out of thin air?

Second, the odd thing about your ambient calculation is that the bumps will change when the position of the light changes. I'm not really convinced that would look correct. Of course, my imagination doesn't really do the best bump mapping, so maybe I'm just not picturing it right.

It's all a big hack anyhow, so I guess one hack is as good as another?

I don't agree with you guys that ambient light has no bumps. Bumps are just a way of adding detail without increasing the poly count.

Dave, I don't really get your solution. But you need a cube map if the light is close, right?

With N'·N you wouldn't need one, so everything can be done in one pass on a GF1.

But I really like the way your ambient light looks. (Really nice.)


lord cronos:
The whole lighting scheme you're referring to is a VERY BIG HACK.
It comes down to one thing:

as GPUs get better programmability and faster rendering, you can use better approaches to simulate lighting…

And yes, the bumps do change on parts where there is no direct lighting. Why? Because there is INDIRECT lighting surrounding them. Even if it is quite smoothly distributed, EVERY light ray has a direction, so for a more correct ambient term you get the directions of the indirect light sources that affect the object… so yes, if you rotate, your bumps are lit differently…

Where my approach is from?
From NVIDIA, I think, but I'm not sure anymore…
The base idea was to do image-based lighting, which is already done for the specular term of the equation. Result: per-pixel bump-mapped reflections with cube maps… they are by no means correct reflections, but assuming you have some environment info you can generate a cube map from, you get quite good results with this…

Now, the diffuse term and the ambient term are in fact the same; ambient just does not exist…

How to do the diffuse then? Say you have an env map surrounding your object; then you can look up the whole hemisphere "above your surface normal" to get the lighting (with convolution filters on cube maps; heavy math).

This means that if the normal faces some very bright part of the cube map it will be bright there; otherwise not…

For the "diffuse" we normally use, we can simply do this for point lights as before: max(0, L·N), where L = normalized point-to-light vector and N = normalized surface normal…

Now, the ambient results from all those not-so-bright "light sources" in the cube map, which are not direct point lights (otherwise we would have millions of them to calculate)…

Say, for a landscape engine, this means every normal facing up will see blue, and every normal facing down will see… well… it depends… say green, for a nice landscape… that means you have to dot the normal against the point_to_sky vector (which is straight up) to find an interpolation value from blue to green…

Now, because the dot product is signed, you have to map it to some unsigned range… that's what the *.5+.5 is for…

For arbitrary light sources it works quite well to say that in the direction of the light, our ambient term gets brighter… bright means warm colors, like the yellow of the bunny.

Dark areas mean cold colors, like the blue of the bunny…

This means that if the normal is looking in the direction of the light source, it will catch more of the warm part, because the light source will "warm up" the whole environment-map hemisphere in that direction…

Of course, this is a VERY ROUGH approximation…
But the results are awesomely good if you take a look at the bunny (the pic is not by me but from some PDF about this whole topic, though I don't know where it is from anymore… I'll take a look around when I find the file on the HD… Google should be smart enough).

But I like your approach as well… saying that if the normal is aligned with the face normal it gets full light, and the more it faces away from it, the less light it can catch…

Yours is independent of the light direction, so it could be precalculated at normal-map generation, by the way.

Well… just grab the blue component out of your normal map and you have it (GL_BLUE in the register combiners).
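That blue-channel shortcut is easy to check numerically: with the usual n * 0.5 + 0.5 packing, the blue channel of a tangent-space normal map stores exactly N'·(0, 0, 1). A quick sketch (the example normal is arbitrary):

```python
# In tangent space N = (0, 0, 1), so N'.N = N'.z: the blue channel of a
# normal map packed with the usual n * 0.5 + 0.5 convention already
# contains the light-independent ambient factor discussed above.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pack(n):
    # map a [-1, 1] normal into [0, 1] RGB, as normal maps store it
    return tuple(c * 0.5 + 0.5 for c in n)

n_prime = (0.6, 0.0, 0.8)    # some perturbed, unit-length normal
rgb = pack(n_prime)          # what the texture would hold
blue = rgb[2]                # the GL_BLUE component

# unpacking the blue channel recovers exactly N' . (0, 0, 1)
assert abs((blue * 2.0 - 1.0) - dot(n_prime, (0.0, 0.0, 1.0))) < 1e-12
```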

I can't find it anymore, except in a Japanese paper where you can find the ambient = N·L * .5 + .5 formula (I can't display the Japanese font, so I get a looooooot of dots representing characters).

Its Lord Kronos!!! Get it??? With a “K”!!! Not a “C”!!! OK??#$*#)@@

(sorry…just had to pick on you after all the perman/permen posts)

Thanks for your additional insight. I guess I would have to see both ways in motion to see which would be better (I'm having a hard time picturing it).

About the GL_BLUE thing: I didn't even think the whole issue through like that, but you're right, that's it… already right there for you. And the register combiners even let you select that component individually. I'm always impressed by how complete their design was. It's like they accounted for practically everything. Seemingly any time I come up with an idea like this, I can always somehow just squeeze it right in there.

I came into this thread originally thinking it was a pretty useless topic. Quite the opposite. Thanks.

If you have shaders, you can simulate a slightly better form of ambient light by doing some things to your diffuse term.

For example, you can map your diffuse such that normals pointing straight away from the light source produce light at level A, and normals pointing straight at the light produce light at level A+L. In this way, the diffuse light will "wrap around" the entire object, making the ambient look a little more distinctive, but the entire scene lighting looks as if it was shot in dense fog.

You can also get fancier by adding the two together, making for a flexible lighting solution:

Lout = A + La * (max(N dot L,0)) + Lb * ((N dot L)+1)/2;

This can be done in a vertex shader, or on regular fixed-function hardware by burning a texture unit (NORMAL_MAP texgen, and an appropriate texture matrix + texture bound).
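A scalar sketch of that combined formula (the A, La, and Lb values below are arbitrary example levels, not recommended ones):

```python
# Sketch of the combined term above:
#   Lout = A + La * max(N . L, 0) + Lb * ((N . L) + 1) / 2
# A is a flat ambient floor, La scales ordinary clamped diffuse, and Lb
# scales the "wrap-around" diffuse that never quite reaches zero.

def l_out(n_dot_l, A=0.05, La=0.7, Lb=0.25):
    diffuse = max(n_dot_l, 0.0)        # classic clamped diffuse
    wrap = (n_dot_l + 1.0) / 2.0       # wrapped term, 0 only when N.L = -1
    return A + La * diffuse + Lb * wrap

lit = l_out(1.0)     # facing the light: A + La + Lb
back = l_out(-1.0)   # facing away: only the ambient floor A remains
side = l_out(0.0)    # silhouette: A + Lb/2, so not black
```

Setting La to 0 gives the pure "dense fog" wrap lighting described above; setting Lb to 0 falls back to plain clamped diffuse plus constant ambient.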

Originally posted by LordKronos:
Its Lord Kronos!!! Get it??? With a “K”!!! Not a “C”!!! OK??#$*#)@@


>>Thanks for your additional insight. I guess I would have to see both ways in motion to see which would be better (I'm having a hard time picturing it).<<

Look at the bunny and you have it pictured. I'm currently ill with fever and influenza, so I don't have the energy to code it myself… sorry… but I will… sometime… in the far future… (when I get such a thing:)

>>About the GL_BLUE thing: I didn't even think the whole issue through like that, but you're right, that's it… already right there for you. And the register combiners even let you select that component individually. I'm always impressed by how complete their design was. It's like they accounted for practically everything. Seemingly any time I come up with an idea like this, I can always somehow just squeeze it right in there.<<

Well… then you're in luck… here on my GF2 MX I always wanted to do this or that, and some funny restriction of the combiners always killed any chance to implement it (I sometimes found about four ways of nearly doing something, but every time something was in my way… grmbl).

Just as an example: I would have gotten perfect per-pixel lighting, including specular power 32 with self-shadowing, in two passes, with ALL VECTORS NORMALIZED PER PIXEL (normal, point_to_light, half-angle), but then I got to feel the [0,1] range clamping between general combiners… so I could only do it with the scene at half brightness… haha. (Now that I think about it, I could do much more after seeing there is no need to normalize the half-angle anymore (found a workaround), finding the brighter-than-one glowing idea (implemented successfully by Richard Nuttman), etc… but who wants to code for a GF2 anymore if he can look at such a golden board? And who wants to code for this when you can see the OpenGL 2.0 specs… and who wants to code for this when he can raytrace… oh well…)

>>I came into this thread originally thinking it was a pretty useless topic. Quite the opposite. Thanks.<<

Just one thing: drop everything you know about Blinn and Phong and that stuff… go look for papers about global illumination and all that, and learn how correct lighting would work. Then you'll see where and how and why Blinn and Phong do make sense in some cases, but also that when the power is there, it is better to look for a more accurate solution…

Just one thing… as long as we can't illuminate our scene with about 100 billion particles moving around the whole scene every frame (our lights, you know), we have to use some statistical approach… meaning: look at how it would really behave, and find a nice, fast solution that looks close to it…

There I found some better ones than Blinn and Phong, in my eyes… used successfully in my raytracer, for example… but oh well… I need to upgrade my P3 500 as well.

>>Its Lord Kronos!!! Get it??? With a “K”!!! Not a “C”!!! OK??#$*#)@@<<


My take on the whole issue:
We all know ambient light doesn't exist in the real world, right?
The adage of "treating every light/surface as a diffuse light" is the way to go.
It's as slow as hell, BUT it's how the world around us works (at least in the big details; perhaps subatomic particles don't work this way, but bugger them, mate). For a while now I've been exploring this method. It's not quick (it can be improved, though), BUT personally the results are really worth it, i.e. a 10,000-polygon object with only diffuse + reflected light looks better than a 100,000-polygon object with stencil shadows + ambient.

(By the way, davepermen, the bunny looks cool. Who cares if it's a hack? This is why I say: as long as the brain goes "hmmm, that looks realistic", it doesn't matter.)
(Also, for the cats out there: if you wanna see blue light, come to Central Otago; blueish, hazy shadows on the hills the whole day. I thought that was only a joke, or something you only see at twilight, before I came here. Mind-blowing, man, the scale of it all.)

Well, I have read a little more about lighting. How far away do you guys think real-time raytracing is? Now that would be cool. If you start working on an engine now, you should take a look at real-time raytracing.

It could be possible within a year or two.

Dave, you said you have made one; how complex is it to run in real time?


It is simple C/C++, so not very fast.
It handles only planes and spheres, so it is not that useful.
It runs at only 160x120, so it is not that big…

But you know what? It runs acceptably on my old system… a P3 500, about three years old now…

I would love to rewrite it for an Athlon XP 2200+ or something like that…

I've written one for triangles, too… it could handle some triangles in real time at small resolutions as well (even at 320x240 on my friend's Athlon 1.4 GHz… yeah).

And this with C++ only!

Real-time raytracing is OpenGL 2.0 away…

Real-time raytracing is DirectX 9 away…

It's about one year…

The technique for doing it on GPUs is already there, but since they are not designed for this kind of programmability, they are a) slow and b) very imprecise… 8-bit components per vector are simply not that useful for representing rays…

Here: running on an ATI Radeon 8500 (never seen it in real time… I would love to see an exe; I know someone with a Radeon…)

There are SIGGRAPH 2002 papers about how to implement real-time raytracing on GPUs…

It will be here soon…
And it will rock.

And it will have the very same lighting problems we have right now…
It will have perfectly sharp shadows (at the beginning… the faster the SPUs get, the more stochastic methods can evolve => soft shadows).

I guess it's 10 years until global illumination.

If someone gets Metropolis raytracing into hardware, even sooner… in about 5 years, then…

Well… for static scenes, anyway… I don't know yet how fast it is to set up the scene for the GPU… dunno, dunno… we'll see…

Oh, "SPU" is not a typo… it's "streaming processor unit"… as we will have to drop the current
vertex processor -> rasterizer -> pixel processor architecture and replace it with some more general one…

OK, maybe raytracing is a bit too expensive for real time. But how about BRDF-based lighting? Now that looks very nice too. That's a lighting model I will try.

Everything can be done on the GPU,
in a vertex shader and register combiners.

But I can't see where I can apply bump maps in a BRDF. Maybe someone can tell me?


[This message has been edited by Tandy (edited 05-07-2002).]

A BRDF needs 4 input angles… to represent the directions to the eye and to the light…

If you take a close look, you'll see that those angles depend on the surface, meaning they are in tangent space…
Or, the other way around: they depend on the normal as well…
That means you need per-pixel tangent space… well, that's not easily possible, but I think you can get the angles from the per-pixel normal and the per-vertex tangent space quite well… (hoping)
The result should be bump-mapped BRDFs…
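A sketch of that setup, assuming a per-vertex tangent frame and one common (theta, phi) spherical parameterization of the two tangent-space directions (all names here are my own, not from any particular BRDF paper):

```python
import math

# Express the light and eye directions in the surface's tangent frame;
# the spherical angles of those two tangent-space vectors are the four
# BRDF inputs (theta_in, phi_in, theta_out, phi_out).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, tangent, bitangent, normal):
    # project a world-space direction onto the (T, B, N) frame
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

def spherical_angles(v):
    # theta is measured from the normal axis, phi around it
    theta = math.acos(max(-1.0, min(1.0, v[2])))
    phi = math.atan2(v[1], v[0])
    return theta, phi

# a trivial frame where tangent space coincides with world space
T, B, N = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
L = (0.0, 0.0, 1.0)    # light straight along the normal -> theta = 0
theta_l, phi_l = spherical_angles(to_tangent_space(L, T, B, N))
```

For the bump-mapped version, the idea from the thread would be to keep the per-vertex T and B but substitute the per-pixel normal from the normal map for N before taking the angles.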

Try a search on shift-variant BRDFs.