Hi all!

Does anyone know how a reflected vector in tangent space can be computed in the register combiners???

It means: (Rx,Ry,Rz) = (-Vy,Vx,Vz) , without a texture lookup…

Any idea?

See you!

This is how the reflection vector equation looks for use in the register combiners:

R = 2 * (N*(N dot V) - V/2)

If you understand how to use the combiners, putting this equation in should be very easy. If you use a Cg fragment program (the fp20 profile, for example) it will *almost* be copy and paste.
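Expanding it, this is just the standard reflection R = 2*N*(N.V) - V. A minimal CPU-side sketch (plain Python, purely to check the algebra — nothing here is combiner code):

```python
# CPU-side sketch verifying 2 * (N*(N.V) - V/2) == 2*N*(N.V) - V.
# Plain tuple math only; not combiner code.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def reflect(v, n):
    # The combiner-friendly form: 2 * (N*(N.V) - V/2)
    return scale(sub(scale(n, dot(n, v)), scale(v, 0.5)), 2.0)

n = (0.0, 0.0, 1.0)          # unit surface normal
v = (0.6, 0.0, 0.8)          # unit view vector
r = reflect(v, n)
# Mirroring about a +z normal flips x and y: (x, y, z) -> (-x, -y, z)
print(r)                     # (-0.6, 0.0, 0.8) up to float error
```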

-SirKnight

Hi!

Oh, good, I was thinking without bump mapping…and tangent space…

Thanks.

If you use Cg then there is a function called reflect(a,b). As far as I remember it returns a vector c which is a reflected at a surface with normal b (check the docs). BTW, I have a feeling that reflection should be implemented in HW.

But that does not work with the fp20 profile. He wants to do this in the combiners (he probably does not have a DX9 card) per pixel, so reflect won't work. The reflect function only works in vertex programs and the fp30 profile.

-SirKnight

You can’t compute the reflection vector in the register combiners, but you can use the NV_TEXTURE_SHADER extension. It supports DOT_PRODUCT_REFLECT_CUBE_MAP_NV texture shader, which calculates the reflection vector per pixel and uses this vector to read from a cubemap.

Originally posted by LarsMiddendorf:

You can’t compute the reflection vector in the register combiners, but you can use the NV_TEXTURE_SHADER extension. It supports DOT_PRODUCT_REFLECT_CUBE_MAP_NV texture shader, which calculates the reflection vector per pixel and uses this vector to read from a cubemap.

Wrong. Cass talked about using the formula I posted above and even gave the nvparse register combiner code that computes the reflection vector per pixel in the combiners. Yes, you can compute the reflection vector using texture shaders, but it CAN also be done in the combiners. It just takes two general combiners, so it can only be done on GeForce 3 and up. Hell, I am doing this and it works fine.

Here, let's assume tex0 = normal map, tex2 = view vector, and tex3 = L, the light vector. Here is an nvparse register combiner code snippet from one of my projects:

{
    rgb
    {
        spare0 = expand( tex2 ) . expand( tex0 );
    }
}
{
    const0 = (.5, .5, .5, 1);
    rgb
    {
        discard = expand( tex0 ) * spare0;
        discard = -const0 * expand( tex2 );
        spare0 = sum();
        scale_by_two();
    }
}

And there you have it. Now just power this result up a bit, I like to use 4*((R.V)^2 - 0.75) for x^16 approximation.
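As a rough check of that power-up trick (a sketch only: the 4x scale and 0.75 bias come from the formula above, and clamp01 stands in for the combiners' [0,1] output clamp):

```python
# Sketch of the specular "power up" approximation from the post:
# clamp(4 * (x^2 - 0.75)) as a stand-in for x^16, with x = R.V in [0, 1].

def clamp01(x):
    return max(0.0, min(1.0, x))

def spec_approx(x):
    return clamp01(4.0 * (x * x - 0.75))

# Compare against the true x^16 across [0, 1].
worst = max(abs(spec_approx(i / 100.0) - (i / 100.0) ** 16)
            for i in range(101))
print(worst)   # worst-case error stays under ~0.2
```

Crude, but it matches at the endpoints, kills the lobe below R.V ~ 0.87, and is cheap enough for one combiner stage.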

-SirKnight

Hi all!

I tried that, and then I found (as part of my own limitations as a programmer) that I don't know how to normalize that in 2 combiners, because it is in the range [-1,1] (not [0,1]).

See you.

It will take 2 more combiners to normalize. They mention how to do this in a presentation on NVIDIA's developer site. I forget exactly which presentation though. If I find it before you do I'll post it.

-SirKnight

Ok I found it. In the presentation titled "BumpMappingWithRegisterCombiners.ppt/.pdf", near the end, they talk about how you can use an approximation to normalize some vectors in the combiners.

Here is what it says in the presentation:

Normalize(V) ~= V/2 * (3-V.V)

(~= means approximately equal)

Where V is a vector derived from the interpolation of unit-length vectors across a polygon AND the angle between all pairs of the original per-vertex vectors is no more than 40 degrees or so.

Simplifying it:

V/2 * (3-V.V) = 1.5V - 0.5V * (V.V)
              = V + 0.5V - 0.5V * (V.V)
              = V + 0.5V * (1-(V.V))

As nvparse script (suppose col0 contains the interpolated vector compressed into [0…1] range):

{
    rgb {
        spare0 = expand(col0).expand(col0);
    }
}
{
    rgb {
        discard = expand(col0);
        discard = half_bias(col0)*unsigned_invert(spare0);
        col0 = sum();
    }
}
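A quick numeric check of that approximation (a sketch only; the test vector is the midpoint of two unit vectors about 30 degrees apart, matching the presentation's interpolation scenario):

```python
import math

# Sketch of Normalize(V) ~= V/2 * (3 - V.V) applied to a vector that
# comes from interpolating two unit vectors ~30 degrees apart.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def approx_normalize(v):
    s = 0.5 * (3.0 - dot(v, v))
    return tuple(x * s for x in v)

a = (1.0, 0.0, 0.0)
b = (math.cos(math.radians(30)), math.sin(math.radians(30)), 0.0)
v = tuple((x + y) / 2 for x, y in zip(a, b))   # midpoint, length < 1

before = math.sqrt(dot(v, v))
nv = approx_normalize(v)
after = math.sqrt(dot(nv, nv))
print(before, after)   # ~0.966 before, ~0.998 after
```

One step of this pulls the length most of the way back to 1, which is why it only costs two combiners and why the ~40 degree restriction matters.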

-SirKnight

Also, the description there is pretty much about taking an interpolated vector from, say, a vertex program and then normalizing it in the combiners. But this still works if you compute a vector (like the reflection vector from above) and use this technique to normalize it. Then just two more combiners to power it up using the approximation formula I posted earlier and you're good to go. Only 6 combiners needed here.

The best solution is to use ARB_fragment_program, but you obviously need at least an R300 or NV30 to do that. Well, in hardware that is.

-SirKnight

[This message has been edited by SirKnight (edited 03-17-2003).]

Well, I gave you the info on how to normalize a vector (well, pretty much) in the combiners, but I must have gone brain dead there for a bit. I just remembered that after computing the reflection vector you DON'T need to normalize it, because the reflection calc preserves the length of the eye vector, and since your eye vector *should* come from a normalization cube map, the reflection vector will be normalized. I'm not able to see my old code right now, so I just forgot about that.

So actually doing lighting this way will only take 4 combiners, not 6. Sorry for the slightly wrong info earlier.
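That length-preservation claim is easy to verify numerically (a sketch; reflect here is the 2*N*(N.V) - V form from earlier in the thread, and N must be unit length for the property to hold):

```python
import math

# Sketch: the reflection R = 2*N*(N.V) - V preserves |V| when |N| = 1,
# so a reflection of an already-normalized eye vector stays normalized.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def length(v):
    return math.sqrt(dot(v, v))

def reflect(v, n):
    k = 2.0 * dot(n, v)
    return tuple(k * nc - vc for nc, vc in zip(n, v))

n = (0.0, 0.6, 0.8)              # unit normal (|N| = 1)
v = (0.3, -0.5, 0.9)             # eye vector, NOT unit length
r = reflect(v, n)
print(length(v), length(r))      # the two lengths match
```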

-SirKnight

Hi.

I don’t know how you can say 4 combiners to do reflection + DOT3 bump (I suppose you are talking about bump mapping):

If you use two cubemaps to normalize light and eye vector:

REFLECTION VECTOR : 2 combiners

SPECULAR EXPONENT (^16 or ^32) : at least 2 combiners (maybe 3 for good quality)

CALC DIFFUSE_COEF*DECAL + SPECULAR*SPECULAR_COEF + AMBIENT : 1 more combiner (if you want to maintain vertex colors, 2 combiners)

FINAL COMBINER: fog

With cubemaps: at least 5 COMBINERS!

Without cubemaps: 8 COMBINERS!

See you!

Sorry, 1 more combiner, because you have to do N dot H and N dot L…

How can you do it in 4?

No, I don’t mean the whole lighting equation in 4 combiners. I’m saying just the Phong specular part only takes 4 general combiners: two for the reflection vector, then two to power it up. Then you just modulate this result with your diffuse texture map in the final combiner stage. Unless you want attenuation; then you will have to have two passes just for the specular part. I have 3 passes total for per-pixel bump mapping using the reflection vector for the specular. I can get it down to two passes if I use a half-angle vector instead of the reflection vector though.

-SirKnight

Hi.

Did you see any difference between the half-angle vector and the reflection vector??? I can see a different deformation on big polygons, but BOTH methods fail there, and I’m talking about an infinite light and a local eye…

See you.