Per-pixel lighting problem?

The NVIDIA docs say it's a good thing to rotate the light vector into texture space and let the graphics card interpolate it per pixel (and indeed, when you then do a dot3 operation with the normal, everything is OK).
But there is one problem (or maybe there isn't, and I just don't see how to solve it): most lights in a game are point lights (or omni lights, in 3DS MAX terms).
If the light's maximum range ends in the middle of a triangle, the per-pixel lighting extends beyond that distance because of interpolation: one or two vertices may have a zero (0) light vector, for example, while the other one or two vertices have non-zero light vectors.
Is what I'm saying logical, or am I missing something?

P.S. If what I'm saying is true, then you get artifacts at the edge of a light's maximum range instead of nice, round lighting.

I think the problem is not that dramatic, because it only appears on very large triangles. For example, take a big square made of two triangles with a low-range point light above it: there it would look as if the light's maximum range reached all the way to the corners.
And maybe you can counter this problem somehow. In the NVIDIA example with the complex bump mapping, which includes self-shadowing of bumps, they use some more techniques to make the lighting look good. I've never read through it really intensively (I'm just using the basic version), but maybe they say something about your problem in there.

Lars

It's only a problem when the triangle is big or the light is really close to the triangle. The way to solve it is to use a normalization cube map. Instead of storing the normalized XYZ direction in the primary or secondary color, use XYZ as the STR texture coordinates for a cube map texture. The cube map is designed so that it outputs an RGB color that is essentially a range-compressed, normalized XYZ vector.
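Roughly, the vertex-side setup is something like this (an untested sketch with illustrative names like normCubeMap, verts, and lightPos; assumes ARB_multitexture and ARB_texture_cube_map, with the light position in the same space as the vertices):

// Feed the *unnormalized* per-vertex light vector into a normalization
// cube map bound on texture unit 1.
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, normCubeMap);

glBegin(GL_TRIANGLES);
for (int i = 0; i < numVerts; ++i) {
    // Vector from vertex to light; no need to normalize it here, because
    // the cube map lookup depends only on the vector's direction.
    glMultiTexCoord3fARB(GL_TEXTURE1_ARB,
                         lightPos[0] - verts[i].x,
                         lightPos[1] - verts[i].y,
                         lightPos[2] - verts[i].z);
    glVertex3f(verts[i].x, verts[i].y, verts[i].z);
}
glEnd();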

Originally posted by LordKronos:
It's only a problem when the triangle is big or the light is really close to the triangle. The way to solve it is to use a normalization cube map. Instead of storing the normalized XYZ direction in the primary or secondary color, use XYZ as the STR texture coordinates for a cube map texture. The cube map is designed so that it outputs an RGB color that is essentially a range-compressed, normalized XYZ vector.

Interesting idea! But aren't S, T, R interpolated as well (I've never used cube maps, so please don't shoot me if I'm wrong)?
I've read the cube map documentation, and the only improvement (as I see it) is that the vectors aren't denormalized anymore.
If I'm wrong, please elaborate a little more on how to encode the vectors in cube maps (I hope I don't have to generate the maps every frame).
BTW, graphics artists usually don't use many faces for the floor, ceiling, crates, and other things like that, so there could be a big problem with the interpolation.

OK, I think you might be looking at this wrong. If you are thinking that the brightness would be 0 at the vertices and 1 in the center, then yes, that would be incorrect when interpolated over the triangle. However, that is referred to as vertex lighting. Per-pixel lighting only uses the interpolated vertex color to determine the direction of the light, not the actual distance. When doing it this way, the direction needs to be normalized to give accurate results, and since the direction gets shortened as it is interpolated, that's why you use a normalization cube map to re-normalize the interpolated vector.
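For that variant, the per-vertex work is roughly this (an untested sketch with illustrative names; T, B, N are the vertex's tangent-space basis vectors, v is the vertex position, and lightPos is the light position in the same space):

// Vector from the vertex to the light.
float lx = lightPos[0] - v.x;
float ly = lightPos[1] - v.y;
float lz = lightPos[2] - v.z;

// Rotate it into texture (tangent) space with the vertex basis T, B, N.
float tx = lx*T.x + ly*T.y + lz*T.z;
float ty = lx*B.x + ly*B.y + lz*B.z;
float tz = lx*N.x + ly*N.y + lz*N.z;

// Normalize, then range-compress [-1,1] into [0,1] so it fits in a color.
float len = sqrtf(tx*tx + ty*ty + tz*tz);
if (len > 0.0f) {
    glColor3f(tx/len * 0.5f + 0.5f,
              ty/len * 0.5f + 0.5f,
              tz/len * 0.5f + 0.5f);
}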

If you don't understand this part of per-pixel lighting, then I can't really help you here (it's a lot to explain). I would suggest reading up on it. Check out the docs on NVIDIA's site. Also check out my articles on the topic:

http://www.ronfrazier.net/apparition/research/index.html

P.S.
OK, after rereading this, I realize what I just told you applies more to per-pixel bump mapping. You might just be after simple per-pixel diffuse lighting. With that, you don't really use the color component for distance or direction. You do a trick with the texture units to get distance/brightness. It's probably best if you read up on it. Again, my site and NVIDIA's both have good descriptions of this technique.
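Very roughly, the texgen half of that trick looks like this (an untested sketch with illustrative names, for a light at lightPos, in object space, with radius r; the exact falloff textures and the per-pixel combine are described in the articles):

// Object-linear texgen mapping positions into the light's range box,
// so s and t run from 0 to 1 across the light's diameter.
float sPlane[4] = { 0.5f/r, 0.0f, 0.0f, 0.5f - 0.5f*lightPos[0]/r };
float tPlane[4] = { 0.0f, 0.5f/r, 0.0f, 0.5f - 0.5f*lightPos[1]/r };
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGenfv(GL_S, GL_OBJECT_PLANE, sPlane);
glTexGenfv(GL_T, GL_OBJECT_PLANE, tPlane);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
// The 2D texture bound on this unit stores (2s-1)^2 + (2t-1)^2, the
// squared XY distance from the light; a 1D texture on another unit does
// the same for Z, and the per-pixel stage then computes roughly
// brightness = 1 - (sum of the two), clamped at zero.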

[This message has been edited by LordKronos (edited 03-22-2001).]

They are linearly interpolated, so s, t, r are interpolated the same way as your normals (or colors, if you use them later in the register combiners). The s, t, r vector is then your unnormalized normal, and the color looked up at that pixel is the range-compressed normalized normal with the same direction, so you get your normalized normal at every damn pixel.
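If you then want the dot product done in the register combiners, the setup is roughly this (an untested sketch, RGB portion only; assumes the normal map on unit 0 and the normalization cube map on unit 1):

glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

// EXPAND_NORMAL maps the range-compressed [0,1] colors back to [-1,1].
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);

// spare0 = expand(tex0) dot expand(tex1)
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV,
                   GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE,
                   GL_FALSE);

// Final combiner computes A*B + (1-A)*C + D; set B to 1 and C, D to 0
// so the dot product in spare0 passes straight through.
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                       GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);

glEnable(GL_REGISTER_COMBINERS_NV);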

Ya know, I hear a lot about renormalization cube maps. I’ve read about them, and I’ve even seen code snippets that use them for bump mapping.

In all that, I have never seen one word of information on how to CREATE one. What does it look like? What exactly is in it?

Originally posted by Korval:
In all that, I have never seen one word of information on how to CREATE one. What does it look like? What exactly is in it?

Hi,
Download the per-pixel bump demo: http://www.nvidia.com/marketing/Developer/DevRel.nsf/pages/CB39B012BD6DB09388256833007A2984

Take a look at the makeNormalizeVectorCubeMap function. That's exactly what you want.
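In case that link moves: the idea of such a function is roughly this (my own sketch, not NVIDIA's exact code; the face orientations follow the ARB_texture_cube_map spec, and size is the per-face resolution):

void makeNormalizeCubeMap(int size)
{
    unsigned char *data = new unsigned char[size * size * 3];

    for (int face = 0; face < 6; ++face) {   // +X, -X, +Y, -Y, +Z, -Z
        for (int y = 0; y < size; ++y) {
            for (int x = 0; x < size; ++x) {
                // Texel center mapped to [-1,1] on the cube face.
                float s = 2.0f * (x + 0.5f) / size - 1.0f;
                float t = 2.0f * (y + 0.5f) / size - 1.0f;

                // Direction vector whose lookup lands on this texel.
                float v[3];
                switch (face) {
                case 0: v[0] =  1; v[1] = -t; v[2] = -s; break;
                case 1: v[0] = -1; v[1] = -t; v[2] =  s; break;
                case 2: v[0] =  s; v[1] =  1; v[2] =  t; break;
                case 3: v[0] =  s; v[1] = -1; v[2] = -t; break;
                case 4: v[0] =  s; v[1] = -t; v[2] =  1; break;
                case 5: v[0] = -s; v[1] = -t; v[2] = -1; break;
                }

                // Normalize and range-compress into an RGB color.
                float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
                unsigned char *p = data + (y * size + x) * 3;
                for (int i = 0; i < 3; ++i)
                    p[i] = (unsigned char)(255.0f * (0.5f * v[i]/len + 0.5f));
            }
        }
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + face, 0, GL_RGB8,
                     size, size, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    }
    delete [] data;
}

You build it once at startup; since it only encodes directions, you never have to regenerate it per frame.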

Michail.