normalization

I'd like to do simple Phong shading (without bumpmapping, and in object space), and I want to normalize both the normals and the light-vertex vector with a normalization cube map. Do I need to use 2 texture units for this, or can I access 1 normalization cube map with 2 different sets of texture coordinates (i.e. normals and the light-vertex vector) using register combiners?

Another question regarding bumpmapping: I'm trying to thoroughly understand everything, but I've got a little problem with tangent space generation. I can generate the tangent vector for a triangle, but my models have per-vertex normals. Can I simply force the tangent to be orthogonal to each normal, or do I need a different method that calculates the tangent vector per vertex?

Regards,
-Lev

On NVIDIA hardware you need to use two texture units to normalize two sets of texture coords; on the ATI Radeon 8500, I believe you can do it with 1 texture unit.
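
On NVIDIA you'd just bind the same cube map texture object to both units, something like this minimal sketch (normCubeMap is assumed to be an already-built normalization cube map, and normal / lightVec are the two un-normalized vectors):

// one normalization cube map, bound to two texture units
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, normCubeMap);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, normCubeMap); // same texture object again

// per vertex: the vectors to normalize go in as texture coords
glMultiTexCoord3fvARB(GL_TEXTURE0_ARB, normal);   // normal on unit 0
glMultiTexCoord3fvARB(GL_TEXTURE1_ARB, lightVec); // light-vertex vector on unit 1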

Just as your normals differ at the vertices of a triangle when an object is smoothed (they are effectively an interpolation of the surface direction), your tangent space should match this as well. Just make your tangent space matrix correct for each vertex normal, and it should be okay.
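
For the "force it orthogonal" part: yes, one Gram-Schmidt step per vertex does it. Rough sketch in plain C (the vec3 type and orthoTangent are just my own helpers, nothing GL):

#include <math.h>

typedef struct { float x, y, z; } vec3;

/* make the face tangent T orthogonal to the vertex normal N:
   T' = normalize(T - N * (N . T)) */
vec3 orthoTangent(vec3 T, vec3 N)
{
    float d = N.x*T.x + N.y*T.y + N.z*T.z;              /* N . T */
    vec3 t = { T.x - N.x*d, T.y - N.y*d, T.z - N.z*d };
    float l = (float)sqrt(t.x*t.x + t.y*t.y + t.z*t.z);
    vec3 r = { t.x/l, t.y/l, t.z/l };
    return r;
}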

Nutty

P.S. All of the above may be bollocks.

I have no knowledge of Radeon 8500 specific OpenGL extensions, but does it really work that one can use 1 texture unit and address 2 sets of texture coords?

And if so, could someone explain HOW?

I dislike that I have to use 2 texture units for the normalisation of my N.L and N.H vectors.
I know there is an approximation that works via the register combiners, but I'm not sure if this would be faster?

Diapolo

Yes, I’m a bit confused by that statement too.

I remember Carmack saying one can access a texture unit twice on a Radeon 8500.

I looked at the ATI site, but the only thing I found about it was the SMARTSHADER paper, which mentioned that the Radeon 8500 supports 6 texture units, but these 6 textures can be addressed up to 8 times. The ATI_fragment_shader spec says nothing about ATI's implementation details and limits, unfortunately.

-Lev

Yeah, in ATI's equivalent of NV texture shaders, you can access the same texture unit more than once in a single pass, so you only need 1 normalization cubemap. (I think)

Apparently normalizing a vector in register combiners is faster than a normalization cubemap on nv hardware. I vaguely remember Cass saying something like this. Haven’t timed it myself.

Nutty

But renormalising in the register combiners is only possible (practical?) on GeForce3 upwards, isn't it?

Yes, the GF1/2 have only 2 combiners, and you need at least 2 for renormalizing. The GF3 has 8 combiners.

-Lev

> you can access the same texture unit more than once in a single pass, so you only need 1 normalization cubemap. (I think)

The ATI Radeon 8500 supports 2 fragment shader passes (beware: it's not the same as what we generally call a "pass"), with 8 instructions per pass. At the beginning of each of these passes, you can sample from any texture. So accessing the same texture twice is possible: once in the first pass, once in the second one. Now, I'm not sure if you can access the same texture twice in the same fragment pass…

Y.

OK, this is the RC code for normalisation in the combiners.

Suppose col0 contains the interpolated (de-normalized) vector V,
range-compressed into [0…1]. The two stages compute one Newton-Raphson
step, V/|V| ≈ V + V*(1 - V·V)/2, which is a good approximation as long
as V is already reasonably close to unit length.

{ // normalize V (step 1.)
  rgb {
    spare0 = expand(col0) . expand(col0); // V dot V
  }
}
{ // normalize V (step 2.)
  rgb {
    discard = expand(col0); // V in [-1…1]
    discard = half_bias(col0) * unsigned_invert(spare0); // (V/2) * (1 - V dot V)
    col0 = sum(); // V + V*(1 - V dot V)/2
  }
}

Now I've got a few questions on that.
NVIDIA says that this is faster than cube map normalisation.
Is it faster to use more general combiners on the GF3 / GF4, or is it faster to use multiple passes instead?

And how would the real GL calls look?
I'm not sure about the second and the final combiner, could someone help me?

glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 2);

// combiner 0: spare0 = expand(col0) . expand(col0)
// col0 is the interpolated primary color, so the input register is
// GL_PRIMARY_COLOR_NV (a constant color would not be interpolated)
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);
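
My guess for the second and the final combiner is below (GL_ZERO with GL_UNSIGNED_INVERT_NV should give the constant 1, and the sum gets written back into the primary color register), is that correct?

// combiner 1: col0 = expand(col0) + half_bias(col0) * unsigned_invert(spare0)
// A * B = expand(col0) * 1
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV, GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV, GL_ZERO, GL_UNSIGNED_INVERT_NV, GL_RGB);
// C * D = half_bias(col0) * (1 - spare0)
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_C_NV, GL_PRIMARY_COLOR_NV, GL_HALF_BIAS_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_D_NV, GL_SPARE0_NV, GL_UNSIGNED_INVERT_NV, GL_RGB);
// discard the two products and write their sum back into col0
glCombinerOutputNV(GL_COMBINER1_NV, GL_RGB, GL_DISCARD_NV, GL_DISCARD_NV, GL_PRIMARY_COLOR_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

// final combiner computes A*B + (1-A)*C + D, so with B = 1 and C = D = 0
// it just passes the normalized vector in col0 through
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO, GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);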

Diapolo

>>Another question regarding bumpmapping: I'm trying to thoroughly understand everything, but I've got a little problem with tangent space generation. I can generate the tangent vector for a triangle, but my models have per-vertex normals. Can I simply force the tangent to be orthogonal to each normal, or do I need a different method that calculates the tangent vector per vertex?<<

for each vertex, add the 3x3 tangent-space matrices of all the triangles sharing it together + then renormalize that (orthonormalize?)
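
i.e. something like this rough sketch (tangent only, the binormal works the same way; buildVertexTangents and the array layout are just my assumptions):

#include <math.h>

typedef struct { float x, y, z; } vec3;

/* sum the face tangent of every triangle sharing a vertex, then
   Gram-Schmidt the result against the smoothed vertex normal
   and renormalize */
void buildVertexTangents(int numTris, const int *idx,   /* 3 indices per triangle */
                         const vec3 *faceTan,           /* 1 tangent per triangle */
                         const vec3 *vtxNrm,            /* smoothed per-vertex normals */
                         vec3 *vtxTan, int numVerts)
{
    int i, j;
    for (i = 0; i < numVerts; i++)
        vtxTan[i].x = vtxTan[i].y = vtxTan[i].z = 0.0f;

    for (i = 0; i < numTris; i++)
        for (j = 0; j < 3; j++) {
            int v = idx[3*i + j];
            vtxTan[v].x += faceTan[i].x;
            vtxTan[v].y += faceTan[i].y;
            vtxTan[v].z += faceTan[i].z;
        }

    for (i = 0; i < numVerts; i++) {
        vec3 n = vtxNrm[i], t = vtxTan[i];
        float d = n.x*t.x + n.y*t.y + n.z*t.z;      /* N . T */
        float l;
        t.x -= n.x*d; t.y -= n.y*d; t.z -= n.z*d;   /* remove the normal component */
        l = (float)sqrt(t.x*t.x + t.y*t.y + t.z*t.z);
        vtxTan[i].x = t.x/l; vtxTan[i].y = t.y/l; vtxTan[i].z = t.z/l;
    }
}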