# Per-pixel distance attenuation on GeForce2?

I’m still a beginner with vertex programs, cube map normalisation and similar stuff, so the answer to my question may well be obvious; we’ll see.
Do you know any way to compute (or approximate) per-pixel distance from the light source using vertex programs?
Or is that simply impossible?

It’s really easy to calc the light-to-vertex distance in a VP and then send that as a color into the pixel pipeline. That value will interpolate linearly over the triangle, but that will not give accurate distance per-pixel, only at the vertices. Probably best is to use a small 3D texture for (point light) distance attenuation, but that’s software (and slow) on anything below a GF3.

fritzlang

[This message has been edited by fritzlang (edited 12-12-2002).]

The only problem I see with linear interpolation is the case of polygons that contain the light source origin projected along the polygon’s normal vector.
In that case linear interpolation is indeed not enough.

Hmm, I think I just came up with an idea for solving this, though at “some” extra cost!
We could tessellate each triangle causing the “linear interpolation problem” into 4 polys (triangles, quads or 5-gons) in such a way that the point on the triangle closest to the light source origin becomes the intersection of the 2 crossing (tessellating) lines.
With that solution linear interpolation should work 100% OK, no?
Usually there are very few such triangles, so this could be optimized quite well.
What do you think about that method?
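A quick way to find the triangles that need this treatment is to project the light onto the triangle’s plane and test whether the projection falls inside the triangle. A sketch in plain C (the helper names are mine, not from the thread):

```c
static void sub3(const float a[3], const float b[3], float out[3]) {
    out[0] = a[0]-b[0]; out[1] = a[1]-b[1]; out[2] = a[2]-b[2];
}
static float dot3(const float a[3], const float b[3]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

/* Returns 1 if the light, projected along the triangle's normal, falls
   inside triangle (a,b,c) -- i.e. the triangle contains its own
   closest-to-light point and would need the tessellation fix. */
int light_projects_inside(const float a[3], const float b[3],
                          const float c[3], const float light[3])
{
    float ab[3], ac[3], n[3], ap[3], p[3], vp[3];
    sub3(b, a, ab);
    sub3(c, a, ac);
    /* triangle normal = ab x ac */
    n[0] = ab[1]*ac[2] - ab[2]*ac[1];
    n[1] = ab[2]*ac[0] - ab[0]*ac[2];
    n[2] = ab[0]*ac[1] - ab[1]*ac[0];
    /* project the light onto the triangle's plane */
    sub3(light, a, ap);
    float t = dot3(ap, n) / dot3(n, n);
    p[0] = light[0] - t*n[0];
    p[1] = light[1] - t*n[1];
    p[2] = light[2] - t*n[2];
    /* barycentric coordinates of p with respect to (a,b,c) */
    sub3(p, a, vp);
    float d00 = dot3(ab, ab), d01 = dot3(ab, ac), d11 = dot3(ac, ac);
    float d20 = dot3(vp, ab), d21 = dot3(vp, ac);
    float denom = d00*d11 - d01*d01;
    float v = (d11*d20 - d01*d21) / denom;
    float w = (d00*d21 - d01*d20) / denom;
    return v >= 0.0f && w >= 0.0f && (v + w) <= 1.0f;
}
```

The projected point is also exactly where the two crossing tessellation lines should meet.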

[This message has been edited by MickeyMouse (edited 12-12-2002).]

Instead of using a single 3D texture, you can also use a 2D and a 1D texture which, when combined, will give you the same result. See http://www.ronfrazier.net/apparition/index.asp?appmain=research/per_pixel_lighting.html

– Tom

Also by Ron Frazier: somewhere on that site is the second version of his per-pixel lighting where, in conjunction with per-pixel bump mapping, he uses tangent-space vectors, a 2D attenuation map for the x and y axes, and interpolated distance for the z axis. This separation is made possible by the use of a tangent-space basis.

JFYI, this per-pixel distance attenuation can be computed without register combiners (need only GL_ARB_texture_env_add, along with GL_ARB_multitexture obviously) and if you want to make things easier you could use automatic texture coordinate generation instead of calling glMultiTexCoord.

Actually the same result could be achieved by combining three 1D textures, which would take the least memory but would cost one extra pass, so combining a 2D and a 1D texture looks best for cards with fewer than 3 texture units and no 3D texture support.
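For reference, the 2D and 1D maps can be filled so that their sum is the normalized squared distance, i.e. the 2D map stores (x²+y²)/r² and the 1D map stores z²/r²; the combiners then compute 1 − (tex2D + tex1D) = 1 − d²/r². A sketch in plain C (the function name and the 8-bit encoding are my assumptions):

```c
/* Sketch: fill a size2d x size2d luminance map with (x^2+y^2)/r^2 and a
   size1d luminance map with z^2/r^2, both over [-r, r] remapped to the
   texel range, clamped and encoded into 8 bits. */
void build_atten_maps(unsigned char *map2d, int size2d,
                      unsigned char *map1d, int size1d)
{
    for (int y = 0; y < size2d; ++y)
        for (int x = 0; x < size2d; ++x) {
            float fx = 2.0f * x / (size2d - 1) - 1.0f;  /* [-1, 1] */
            float fy = 2.0f * y / (size2d - 1) - 1.0f;
            float v  = fx*fx + fy*fy;                   /* (x^2+y^2)/r^2 */
            if (v > 1.0f) v = 1.0f;                     /* clamp corners */
            map2d[y*size2d + x] = (unsigned char)(v * 255.0f + 0.5f);
        }
    for (int z = 0; z < size1d; ++z) {
        float fz = 2.0f * z / (size1d - 1) - 1.0f;      /* [-1, 1] */
        map1d[z] = (unsigned char)(fz*fz * 255.0f + 0.5f);
    }
}
```

Texture coordinates then just need to place the light at the center of both maps, scaled by 1/r.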

Originally posted by vincoof:
JFYI, this per-pixel distance attenuation can be computed without register combiners (need only GL_ARB_texture_env_add, along with GL_ARB_multitexture obviously).

But how do you do that? I bet you have to invert the point light textures and make other minor changes throughout the code.
It would be nice if it can be done - it would then be NVIDIA-independent.

The register combiners compute the following :
Attenuation = Texture0 + Texture1
Final = (1-Attenuation)*Color
where Attenuation is stored in the Spare0 register, and Final is computed in the final combiner stage.
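In scalar form, the per-fragment arithmetic (including the saturate that combiner stages apply) is just:

```c
/* Sketch of the arithmetic performed per fragment: sum the two
   attenuation textures, saturate, then modulate the inverted result
   with the primary color. */
float combiner_final(float tex0, float tex1, float color)
{
    float atten = tex0 + tex1;
    if (atten > 1.0f) atten = 1.0f;   /* saturate after the add stage */
    return (1.0f - atten) * color;    /* final stage: (1-Spare0)*Color */
}
```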

You can perform the same thing using ARB_multitexture (with two texture units), ARB_texture_env_combine and ARB_texture_env_crossbar. Even though you need to support 3 extensions instead of 1 (NV_register_combiners), more graphics cards support those 3 extensions than register combiners.

Instead of computing Attenuation into Spare0, compute it into texture stage 0 ; and instead of computing the Final color in the final combiner stage, compute it in texture stage 1 :
TextureStage0 = Texture0 + Texture1
TextureStage1 = (1-TextureStage0)*Color

To use two textures, you need the ARB_multitexture extension with two units supported (call glGetIntegerv with GL_MAX_TEXTURE_UNITS_ARB if you’re not sure).

To perform the Texture Stage 0, you need the ARB_texture_env_combine and ARB_texture_env_crossbar extensions.

To perform the Texture Stage 1, you need the ARB_texture_env_combine extension.

The lines of code should be something like this :

// Enable and bind 2D attenuation map into texture unit 0
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, atten2D);

// Enable and bind 1D attenuation map into texture unit 1
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_1D);
glBindTexture(GL_TEXTURE_1D, atten1D);

// Configure texture stage 0 : TextureStage0 = Texture0 + Texture1
glActiveTextureARB(GL_TEXTURE0_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_ADD); // default is GL_MODULATE
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE0_ARB); // crossbar source
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE1_ARB); // crossbar source

// Configure texture stage 1 : TextureStage1 = (1-TextureStage0)*Color
glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PRIMARY_COLOR_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_ONE_MINUS_SRC_COLOR);

and later you should reset the texture stages so that other texture-based rendering will behave correctly. The following code should do the trick (if you’re not using texture combiners somewhere else) :

glActiveTextureARB(GL_TEXTURE0_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

(edit: corrected some tokens because, for instance, GL_SOURCE0_ARB does not exist)

[This message has been edited by vincoof (edited 12-20-2002).]

I’ve seen that most NVIDIA cards don’t support the ARB_texture_env_crossbar extension. In fact that’s not a problem, because most NVIDIA cards support the NV_texture_env_combine4 extension, which is 99% compatible with ARB_texture_env_crossbar (and hopefully the remaining 1% does not affect this algorithm).