Per-pixel attenuation problem.

I’m doing 3d attenuation with one 2d texture, but for some reason it doesn’t quite work :\

Could someone please take a look at these programs and tell me if I’m doing something wrong (ARB_vertex/fragment_program):
http://www.datranet.net/timo_heubach/theradiantraccoon/fragmentprogram2.txt and http://www.datranet.net/timo_heubach/theradiantraccoon/vertexprogram2.txt

Cheers!

Instead of using a 3D texture for attenuation, just use this:

SUB sometemp, LightPosition, Vertex;          # vector from vertex to light
MUL sometemp, sometemp, inverselightradius;   # scale by 1/lightradius
DP3_SAT Attenuation, sometemp, sometemp;      # saturated squared distance, (d/r)^2

That should give the same results, if not better! (and no need to sample a texture)
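In scalar terms, those three instructions work out to the following (a Python sketch with hypothetical names, not actual shader code):

```python
def saturated_distance_sq(light_pos, vertex, inv_light_radius):
    # SUB: vector from vertex to light
    delta = [l - v for l, v in zip(light_pos, vertex)]
    # MUL: scale by 1/radius, so distances are in "light radius units"
    scaled = [c * inv_light_radius for c in delta]
    # DP3_SAT: squared length, clamped to [0, 1]
    return min(1.0, sum(c * c for c in scaled))

# halfway to the light radius -> (0.5)^2
print(saturated_distance_sq((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.5))  # → 0.25
```

The falloff itself is then 1 minus this value, which reaches zero exactly at the light radius.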

Also, instead of doing min/max, use DP3_SAT or use just max (to get rid of negative lighting).

Anyways, I think I found the problem: I forgot to actually saturate the attenuation before using it. That is, these lines:

ADD attenxy, attenxy, attenz;
ADD attenxy, clamp0, -attenxy;

should have actually been:

ADD attenxy, attenxy, attenz;
ADD_SAT attenxy, clamp0, -attenxy;

Such a simple mistake =D

But I don’t catch your drift. You mean that you should just attenuate based on the distance of some vertex to the light? In a fragment shader?

And btw. I’m not using a 3D texture :]

Cheers!

Yeah, just do the attenuation in the shader itself. Any card that supports it is fast enough to do all of that stuff in the shader.

Attenuation = 1-((Light-Vertex)/LightRadius)^2

Which can be done with 3 instructions (and will look better than a texture, and might even be faster).

Unless you really need the light to fall off to black at a certain radius, I recommend you use something like 1 / (1 + d*d) instead. Looks more realistic.
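A quick side-by-side of the two curves (a Python sketch; the sample distances are hand-picked for illustration):

```python
def quadratic_falloff(d, radius):
    # 1 - (d/r)^2, clamped: reaches exactly zero at d = radius
    return max(0.0, 1.0 - (d / radius) ** 2)

def smooth_falloff(d):
    # 1 / (1 + d*d): never reaches zero, so no hard cutoff circle
    return 1.0 / (1.0 + d * d)

for d in (0.0, 0.5, 1.0, 2.0):
    print(d, quadratic_falloff(d, 1.0), smooth_falloff(d))
```

Note the tradeoff: the first curve lets you bound the light's influence to a radius (handy for culling), while the second keeps a soft tail beyond it.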

This is one of those things that seems really nice but is usually not the best approach. Attenuation is a perfect example of something that is much better done as a mixture of vertex shader and texture operations.

One, if you do the 1-d^2 type trick in the shader on 8-bit hw, it bands like the Who. Using textures to do the same function is great, because you avoid some banding to start with, plus you get filtering, which cleans up any remaining banding nicely.

Two, on r300 and nv30, it will be faster to do the texture-based approach. Plus, the transform into light space is totally linear. Doing this in a pixel shader is simply sloppy. A vertex shader can do it more accurately ( fp32 vs fp16/24/32 ) and faster.

The main advantages of doing it per-pixel are :

a) convenience
b) if you have a really custom attenuation function on quite a large polygon

In the case of b), I would question why you have very large polygons if targeting a r300 or nv30.

There are other per-pixel attenuation functions you can do with textures as well.

1 - d^2 ( mine )
1 / e^( d^2 ) ( cass’ )
1 - d

You do 1-d by doing

tex t0 // normalize cube map for L
// v0 contains iterated normalized L
dp3_sat r0, t0_bx2, v0_bx2

mov r0, 1-r0 // 1 - d

AFAIK, this last one was independently invented by John Watte and myself.
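In scalar terms the 1-d trick looks like this (a Python sketch; it assumes v0 carries L scaled by the inverse light radius, and all names are hypothetical):

```python
def one_minus_d(light_vec, light_radius):
    # what the normalization cube map provides: per-pixel normalize(L)
    # (assumes a nonzero light vector)
    length = sum(c * c for c in light_vec) ** 0.5
    unit = [c / length for c in light_vec]
    # what the interpolator provides: L scaled into the light's range
    scaled = [c / light_radius for c in light_vec]
    # dp3_sat: dot(normalize(L), L/r) = |L|/r = d, clamped to [0, 1]
    d = min(1.0, sum(u * s for u, s in zip(unit, scaled)))
    return 1.0 - d  # the "1 - d" falloff
```

The dot product of a unit vector with the same vector scaled by 1/r collapses to the scaled length itself, which is why no distance texture is needed.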

To sum up, just because it CAN now be done per-pixel doesn’t mean it should. Please use your fragment instructions to do things that vary non-linearly per fragment.

Originally posted by SimmerD:
AFAIK, this last one was independently invented by John Watte and myself.

Not much of an invention IMO. Back when I used textures for attenuation that was what I used. Anyway, using textures for attenuation is cheap, but it doesn’t give the quality of doing something like what I recommended above in a fragment program, or even of doing the full attenuation math OpenGL does for vertices, but for every pixel.

Using two texture lookups to do 1-D^2 lighting is old hat see: http://www.ronfrazier.net/apparition/index.asp?appmain=research/advanced_per_pixel_lighting.html
and http://www.shaderx.com/direct3d.net/tutorials/shader/shader5.html

If you want to do 1-D^2 lighting without texture lookups you can do this:

(In the following (x,y,z) is the vector from the light to the vertex interpolated across for each fragment and “r” is the radius of the light - see the above articles if unsure how to generate of calculate these)

When doing 1-D^2 attenuation you usually do:

// ((x/r),(y/r)) are the (s,t) values of the texture0 and (z/r) are the (s) values of the texture1

tex t0
tex t1
add r0, 1-t0, -t1.a ; 1.0 - (t0+ t1.a)

//Attenuate an existing color or bitmap
mul r0, c0, r0

This looks up two textures and combines the result.
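As a sketch of what those two lookups encode (assuming the usual Frazier-style distance textures: the 2D texture stores s*s + t*t, the 1D texture’s alpha stores s*s):

```python
def two_texture_attenuation(x, y, z, r):
    # texture 0 (2D), addressed by (x/r, y/r): precomputed s*s + t*t
    t0 = (x / r) ** 2 + (y / r) ** 2
    # texture 1 (1D, alpha), addressed by z/r: precomputed s*s
    t1_a = (z / r) ** 2
    # "add r0, 1-t0, -t1.a"  ->  1 - (t0 + t1.a), clamped to [0, 1]
    return max(0.0, 1.0 - (t0 + t1_a))
```

The two lookups split d^2 = (x^2 + y^2)/r^2 + z^2/r^2 across the two textures, and the add instruction reassembles 1 - d^2.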

However I find doing:

// ((x/r),(y/r),(z/r)) are the (s,t,r) values of the texture0

texcoord t0

//Calculate the attenuation
dp3_sat r1.rgba, t0_bx2,t0_bx2

//Attenuate a bitmap or color in r0
mul r0.rgb, r0, 1-r1

Can look just as good, uses no textures, takes one less texture instruction, and uses the same number of arithmetic instructions (if you have to multiply by a bitmap or color anyway). There are some small banding artifacts, but they are no worse than the texture lookup method, and on a bump-mapped surface you do not notice them.

Using this you can avoid extra textures that need to be bound when rendering and you may not need extra passes.

In the (ps1.4) ATI demos they allow any type of attenuation with a dependent 1D texture lookup. The theory goes something like this:

-First, get the d^2 value as above (texture coordinate then a dp3).
-Then use this value to do a dependent texture read into a 1D
attenuation map that has values pre-computed for all the distances.
Separate diffuse and specular values can be encoded into the one lookup
texture with one in the rgb and the other in the alpha.

Doing this you can have really bizarre attenuation like sin(d), cos(d)
or any other equation that is limited to the 0…1 range.
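The precomputed-table idea can be sketched in Python like this (nearest-sample lookup; real hardware would also filter between texels):

```python
import math

def build_attenuation_lut(func, size=256):
    # precompute func(d) for d in [0, 1]; this plays the role of the 1D texture
    return [max(0.0, min(1.0, func(i / (size - 1)))) for i in range(size)]

def sample_lut(lut, d_squared):
    # the dependent read: address the table with the d^2 value, clamped
    coord = max(0.0, min(1.0, d_squared))
    return lut[round(coord * (len(lut) - 1))]

# any curve limited to the 0..1 range works, e.g. a cosine falloff
lut = build_attenuation_lut(lambda d: math.cos(d * math.pi / 2.0))
```

Since the table is addressed by d^2 rather than d, the stored function should be authored against squared distance if exact shapes matter.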

Here is some pseudo PS1.4 code (not tested, purely for reference):

//Look up the texture coordinate data
texcrd r0.rgb,t0

//Get the distance squared
dp3 r5.rgb,r0_bx2,r0_bx2

phase

//Lookup the 1D attenuation texture
texld r1,r5

//Multiply diffuse attenuation by current diffuse
mul r0.rgb,r2,r1

//Multiply specular attenuation
+mul r0.a, r3.a,r1.a

In PS1.2 this would be as follows (assuming you pass the appropriate biased
coordinates in t0 and un-biased coordinates in t1). (I have not tested
this, so I do not know if t1 is clamped to 0…1 before the dp3; if
it is, this technique will not work. It needs to be clamped to the -1…1 range.)

texcoord t0
texdp3tex t1,t0_bx2

//Multiply diffuse attenuation by current diffuse
mul r0.rgb,r2,t1

//Multiply specular attenuation
+mul r0.a, r3.a,t1.a

In PS 1.1 we have to use a 2D attenuation lookup (it only needs to be 2 pixels
wide) as a 1D dependent texture lookup is not supported. (It does not
really matter what value is in t2.)

texcoord t0
texm3x2pad t1, t0_bx2
texm3x2tex t2, t0_bx2

//Multiply diffuse attenuation by current diffuse
mul r0.rgb,r2,t2

//Multiply specular attenuation
+mul r0.a, r3.a,t2.a

Of course if you needed attenuation outside the 0…1 range, you could also use x2 and x4 tricks on the texture lookup to get a 0…4 range.

Hope that helps. (Of course if you are using ARB_fragment_program the above is really moot as you can do all the calculations yourself.)

[This message has been edited by sqrt[-1] (edited 03-05-2003).]


re: 1-d

OK, and invented by Humus. I thought it was a bit nonintuitive at first, but I guess not.

re: old hat

Yes, it is old hat, 1999 or so. See the references of the references you cite.

Remember that doing per-pixel lighting doesn’t imply you have a noisy bumpmap. If you are trying to do a smooth metal surface with bumps used for rivets, the 1-d^2 per-pixel will be visibly banded. The experiments I saw on a gf4ti showed a big visible difference from doing filtered texture fetches rather than the calculation. On an 8500, which has higher per-pixel precision than a gf4, this may not be an issue.

I wholeheartedly agree that doing a dependent texture is a great way to go on higher shader models.


Using two texture lookups to do 1-D^2 lighting is old hat see: http://www.ronfrazier.net/apparition/index.asp?appmain=research/advanced_per_pixel_lighting.html
and http://www.shaderx.com/direct3d.net/tutorials/shader/shader5.html

Here’s another trick for doing per pixel attenuation. It’s been invented by me (correct me if I’m wrong).

It works for projected spotlights and it gives 1/z distance function (yes! 1 over z). It also removes spotlight backprojection for free, uses no additional tmu (besides the one for spotlight map), and not even one pixel shader instruction! Impossible? Here it goes…

Use 3d texture for your spotlight map. This texture is of dimension N x N x 2 where N is the spotlight map size so it only takes twice as much space as the standard 2d spotlight map. The spotlight map is placed at depth 1 and all texels at depth 0 are black.
Turn bilinear filtering on, turn mipmaps off and set wrap modes to clamp_to_edge for s,t and r. Set up a texgen for s and t as for standard projected spotlight (camera matrix followed by a perspective projection). The texgen for r should generate a constant positive value A which scales an attenuation distance. The whole texgen matrix should look like this:

S1 S2 S3 S4
T1 T2 T3 T4
0 0 0 A
0 0 -1 0

where Sx and Tx are from spotlight projection matrix. Such matrix generates texture coordinates of the form:

s t A -z

where z is the vertex depth in spotlight space. Generated r value used for texture addressing is -A/z (division by q). And here is the magic:

  1. For any positive value of z (behind the spotlight) it gives a negative value for r - clamped to 0 by the clamp mode. Sampled texels are always black - everything behind the spotlight is black.
  2. For z in the range [-0, -A] r coord is in the range [inf, 1] - clamped to 1 by the clamp mode. It gives the spotlight map at full intensity.
  3. For z in the range [-A, -inf] r coord is in the range [1, 0] - spotlight smoothly fades to black due to bilinear filtering. The attenuation function is 1/z !

One major drawback of this method is that the mipmaps must be turned off. They ‘collapse’ the third texture dimension to 1 texel and that screws up everything, of course. If only we could specify the mipmap usage separately for every coordinate…

Kuba

PS. Sorry for my english, especially for the misuse of the ‘the’ word

[This message has been edited by coop (edited 03-07-2003).]

I think I actually computed the attenuation in the fragment shader before I changed to the texture attenuation. And if you have a texture of size 512x512 as an attenuation map, you can do 3d attenuation with just the one 2d texture. I think that’s kind of handy, and I actually need the accurate falloff if I want to project shadow volumes to the light range. But I don’t do that atm. so that’s not a necessity.

Btw. this has been bugging me for some time: does ATI have branching support in any extension for shaders/programs? I can’t find any :\ ARB_vertex_program doesn’t support it, and I haven’t seen any in EXT_vertex_shader :\ I know that NV_vertex_program_x_x or whatever they’re called :) supports branching, but ATI seems to support it only in D3D9 as predefined behaviour, that is, non-dynamic. Or am I wrong? Does ATI have support for branching in some extension that I’ve missed?

Cheers!

hm… i always use (1-dst^2/light_radius_squared)^2

fast, simple, is 0 at dst = light_radius, and looks like exp(-dst^2/radius_squared), i.e. like the gaussian approach to attenuation… or so…

looks pretty natural for being very simple. i actually derived it in two different ways independently and got that same equation both times, proof enough for me that it’s the way to go

the difference from reality is negligible for the speed gain, especially on low-end hw.
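For reference, the two curves side by side (a Python sketch; sample points chosen arbitrarily):

```python
import math

def squared_quadratic(d, r):
    # (1 - d^2/r^2)^2: exactly 0 at d = r, with a smooth approach to the cutoff
    t = max(0.0, 1.0 - (d * d) / (r * r))
    return t * t

def gaussian_like(d, r):
    # exp(-d^2/r^2), the curve it is meant to resemble
    return math.exp(-(d * d) / (r * r))

for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(d, squared_quadratic(d, 1.0), gaussian_like(d, 1.0))
```

The two agree closely over most of the range; they diverge near the radius, where the polynomial reaches exactly zero while the exponential still holds about e^-1 of the intensity.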