per pixel attenuation

Well, maybe I’m completely wrong or this is trivial, but if we have a precalculated attenuation map, with the surface tangent-space details we blend the map with a quad of constant dimensions, using stencil to make sure the quad only blends within the surface, and use glColor to filter the attenuation map color according to the light color and distance. So what’s wrong with it?

“we have a precalculated attenuation map, with the surface tangent-space details we blend the map with a quad of constant dimensions”

How do you blend a 1D attenuation map with a 2D texture?

“using stencil to make sure the quad only blends within the surface”

Hmm, I’ve heard of a per-pixel lighting/attenuation technique that needs the stencil buffer.

“and use glColor to filter the attenuation map color according to the light color and distance”

Distance to what?

No, I mean a 2D attenuation map, just like a lightmap.

“Distance to what?”
Distance to the surface. The further the distance, the lower a variable k that is multiplied by the RGB parameters glColor receives.
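The idea above can be sketched in C on the host side. The falloff curve and all names here are my own assumptions for illustration; the poster does not specify the exact function for k:

```c
#include <assert.h>

/* Hypothetical linear falloff: k = 1 at the light, 0 at max_dist and beyond. */
static float falloff_k(float dist, float max_dist)
{
    float k = 1.0f - dist / max_dist;
    return k < 0.0f ? 0.0f : k;
}

/* Scale the light color by k; the result would be handed to glColor3fv
   before drawing the attenuation quad. */
static void scaled_light_color(const float light_rgb[3], float dist,
                               float max_dist, float out_rgb[3])
{
    float k = falloff_k(dist, max_dist);
    for (int i = 0; i < 3; ++i)
        out_rgb[i] = light_rgb[i] * k;
}
```

Since glColor is applied per primitive (or per vertex), this k is constant across the whole quad, which is exactly the limitation Dtag points out below.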

Maybe even if it does work, it’s impractical, and may not work well.

Well, you can’t use the distance to a surface in a per-pixel program, because every point of the triangle probably has a different distance to the light.
What you describe really seems to be just a lightmap. I’m talking about real per-pixel lighting with per-pixel attenuation and per-pixel light-vector calculation.

Dtag, the question (as far as I understood it) was how to do attenuation without extensions, so without real per-pixel lighting. Your example is good, but it’s an extreme case. If you use an algorithm like the one I proposed, you should use relatively small triangles to avoid interpolation issues (as with per-vertex specular lighting).

I never said anything about extensions. In fact, I’m using pixel/vertex shaders (Cg). What I’m doing IS real per-pixel lighting, because the final diffuse value is calculated per pixel and not per vertex. The only problem I have is that the light/half vector and the attenuation are not interpolated properly across the triangle in situations like the one I mentioned before.

Ah, and by the way: about your suggestion to tessellate the surface. I can’t do that, because the geometry is provided by the map format, which I don’t want to change. Tessellating the surface at runtime is something I don’t really want to do either.

[This message has been edited by Dtag (edited 06-01-2003).]

The simplest way in vanilla GL that I have found is to use a combination of two textures. One 2D texture holds a radial gradient that represents your attenuation function in two dimensions; the other is a 1D texture that holds the attenuation function for the third dimension. You can either render in two passes or multiply the two using texture environment combiners. The resulting color is your 3D attenuation function, with the texture coordinates as input. Simple object-space or eye-space texgen can generate the necessary texture coordinates for you, and then you can even move and scale the light source using the texture matrix.

Dtag, vertex/pixel shaders are extensions

Oh, on DX9 hardware you can pass the vertex and light coordinates per texture coord and compute the distance vector in your fragment shader. On GF3/GF4… hmm… I think it’s just impossible to do perfectly. Maybe you can use some texture-shader tricks. Or use attenuation maps.
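The per-fragment math this describes, written out in C for clarity (names and the squared-distance falloff are my own assumptions; on real hardware this would run in the fragment shader with the two positions arriving as interpolated texture coordinates):

```c
#include <assert.h>

/* Per-pixel attenuation from fragment and light positions.
   Works on the squared distance, so no sqrt is needed. */
static float per_pixel_atten(const float frag_pos[3],
                             const float light_pos[3], float radius)
{
    float d2 = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float l = light_pos[i] - frag_pos[i];
        d2 += l * l;
    }
    float a = 1.0f - d2 / (radius * radius);
    return a < 0.0f ? 0.0f : a;
}
```

Because the two positions interpolate linearly across the triangle, this gives a correct per-pixel distance even on the large triangles Dtag mentioned, unlike interpolating a per-vertex attenuation value.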