per pixel attenuation

I have a problem implementing dynamic lighting across planar surfaces with a per-pixel attenuation method, without using GL extensions. I actually know how to do it with Quake levels and lightmaps, but I just want to do the lighting without lightmaps. Is there any other way?

If you don’t want to use extensions, you can either use conventional OpenGL lighting or use lightmaps. One solution with multitexturing could be to use an attenuation map together with an intensity map, generate texture coords for every vertex/light, and use some sort of alpha interpolation.
BTW, if you want to implement dynamic lighting you should use extensions. Dynamic lighting is nothing for old graphics cards.

So there is a method using only multitexture to do it? But I think multitexture could easily be replaced, right? So what is it, anyway? BTW, do you know how Quake3 does dynamic lighting across Bezier patches?

Thanks

Quake3 tessellates the Bezier patches into standard triangles when it loads its maps, so it treats them like any other surface when doing dynamic lighting.

Yes, I know that. But I just wondered if there are some optimizations or other ways to do it. Also, not all of the surfaces in Quake3 have lightmaps, but they still get distance-attenuated effects under dynamic lighting.

The best way is to use per-pixel lighting, but that means using extensions. You can emulate it with textures. For example, an attenuation map is a 1D texture with alpha values for which you generate a texture coord based on distance, and blend it with the color of the fragment.
BTW, conventional lighting in GL has attenuation! What are you trying to do? I don’t understand your problem.

Well, I just want to do the dynamic lighting without Q3’s lightmap scheme. Say I have a triangle and all of its details: the vertex coords, texcoords and normal, the tangent-space details, and then a point light. How do you do attenuation? If there is no way to do it without extensions, I will just give it up.

As far as I know, the problem is that for per-pixel lighting, a dot product between two vectors has to be computed to get the brightness of a fragment (light vector * normal vector), and it is not possible to do that per fragment in standard OpenGL. There are at least three ways to do it with extensions (as far as I know), but none without them.

Jan

A dot product is simply the sum of three products, which you can do in unextended OpenGL… if you are willing to do three passes for each dot product.

Actually, OpenGL can’t do subtraction directly, so you have to do 6 passes with an approximation of subtraction… I believe Cass’s thesis was about doing bump mapping in unextended OpenGL; you could search for that.

How does per-pixel attenuation work if there’s a big triangle and the light is right in the middle of it? The attenuation values for the vertices will be very high, since the light is far away from them. Interpolating this value across the triangle results in a very dark triangle, even though the light is standing right in front of it!

That’s per-vertex attenuation you described, not per-pixel.

Simple solution: tessellation with the standard GL lighting model. I think this would be faster than using six passes.

Well, how do you calculate per-pixel attenuation then? I think you’d need to store the vertex position in a texcoord that gets interpolated for the pixel shader. Then you could calculate the distance between the pixel pos and the light pos (which you also need to pass to the pixel shader) in order to calculate the attenuation. Now, if you wanted to do it properly so everything is per pixel, you would also need to calculate the light vector and the half vector this way, because otherwise the center of the light is not interpolated properly across the triangle. To do this, you’d also need to interpolate the objToTangentSpace 3x3 matrix, which is where I get stuck, since I don’t know whether it is at all possible to correctly interpolate this matrix across the triangle.

Thanks for the suggestions, but I won’t bother doing it that way now. Use dynamic lightmapping for Quake-style lighting, or use register combiners and stencil without Quake’s lightmaps for better effects and performance.

Can anyone explain where the error in my theory lies?

Your theory is too complicated? And how are you going to compute the subtraction of texcoords without DX9 hardware? Or at least without extensions? (It could be possible to do with NV_texture_shader, I guess, but I wouldn’t bother.)

Well, how do you do per-pixel attenuation and per-pixel light vector calculation properly then? (BTW: where did I say anything about texcoord subtraction? You don’t need that for what I described above. If it would work at all, that is, since I don’t know whether it’s possible to interpolate the tangent/binormal/normal properly across the triangle.)

[This message has been edited by Dtag (edited 05-31-2003).]

The idea is to compute per-pixel attenuation without extensions.
Say ‘a’ is the attenuation factor (0 to 1):

a = attfunc(distance)

r is the radius of the light sphere (a point at distance r will have a = 0).

Then we compute a 1D map with the GL_INTENSITY format. It will be our attenuation map. We activate clamping and fill our map with attfunc values, so that map(dist/r) gives us the attenuation factor.

The rest is simple: you compute the distance at every vertex and pass distance/r as the s texture coordinate for the attenuation map. There you have it.

This is the common way to compute attenuation (on pre-DX9 hardware).

Hmmm, I still don’t understand how this solves the problem I explained before. If the light is in front of a big triangle, the attenuation will still be interpolated, which will again result in high attenuation all over the triangle.

“The rest is simple: you compute the distance at every vertex and pass distance/r as the s texture coordinate for the attenuation map. There you have it.”

Example: if the light is in the center of the triangle and we have a distance of 100 to all verts, the vertex shader will compute the same texcoord for all 3 vertices. Interpolated across the tri, this is again the same coord for all pixels. This is not a true per-pixel thing!