well, eventually, i will have a geforce4 in a few days. so i decided to start learning about the texture shaders. i think i understand it, but one little thing i don't. in the holy grail bump mapping, the shaders compute N*H and N*L and use them as tex coords into an illumination map. what does this map look like? how is it created?

I guess you are talking about Blinn bump mapping, since you mentioned the half vector H.

You may consider doing Phong bump mapping instead. It has an advantage in the interpolation of specular lighting (easiest to see in large flat areas), but a disadvantage in performance, and it needs two passes if you want a color texture as well. A good guide can be found, of all places, in the NV_texture_shader spec itself. Look under the question: "Does this extension support so-called “bump environment mapping”?"

To your question: N o L and N o H are computed by the second and third texture units, and fed as 2d coordinates into a ‘lighting’ map bound to the third texture unit. (The first texture unit fetches the normals with standard 2d texturing.) This map is really a composition of two 1d maps which can be thought of as look-up tables. The first table takes N o H, which is the specular angle cosine, and does the exponentiation. The second table takes N o L, which is the diffuse angle cosine, and simply gives it back. In addition, the texture uses clamping in both coordinates (both 1d ‘tables’), so the texture lookup also performs the negative clamping (obtuse angles don’t light).

The texture format is intensity-alpha – a two-component texture. This is because it needs to ‘combine’ the output of the two ‘tables’.

The map contents are therefore simply:

Image [ s , t ] = { s ^ e , t }

where e is the specular exponent

intensity = s ^ e , alpha = t

This intensity and alpha are then available to the register combiners, which perform the lighting equation calculation – combining ‘material+lightSource’ properties such as ambient color, diffuse color, emission color, specular color (or intensity), and texture color (you can use the fourth texture unit for a color texture lookup).

I managed to do it in two register combiner stages (plus the final stage).

so this way we can use any exponent we want for the specular. i mean s^e, any e we want, unlike on a geforce2 or 1, where we must multiply s by itself in the reg combiners, so the maximum is 4. right?


Yes, you can use any specular exponent you like with the DOT_PRODUCT_TEXTURE_2D texture shader style bump mapping. You can get a higher exponent than 4 with register combiners, but the artifacts begin to be bothersome above 8.

Also, you should be aware that the specular is somewhat sensitive to vector interpolation artifacts. Higher specular exponents will require more dense meshes to reduce the impact of denormalization of the interpolated H vector.

Thanks -

Cass

you can have a nice exponent of 32 if you want, and it does not look too artifacted…

http://tyrannen.starcraft3d.net/PerPixelLighting

just press ‘n’ in the demo to switch between approx ^32 and real ^32 (2passes)

yes, you can go to high exponents if you like.

You won’t get the banding artifacts that appear with the successive squaring-truncating method (exponentiation in the register combiners), since you compute the exponent in float accuracy when creating the table (texture), and then truncate only once.

It may also be possible to use the HILO texture format that outputs two 16-bit values, though I didn’t look into how this output goes into the register combiners. Those are fundamentally 9-bit, so you’ll have to do mighty tricks there…

Anyway, from my experience, I don’t think the extra precision is necessary, and high exponents look nice and smooth (if you like your materials that way…)

so the main difference between phong and blinn lighting is that phong uses a reflection vector to compute the specular, and is not tessellation dependent because it is possible to compute reflection vectors per pixel with the texture shaders, while blinn uses a half vector which is supplied per vertex and makes the method tessellation dependent. am i right?

Yes, you are basically correct.

To put it in a short sentence, Blinn has the half vector, and Phong has the reflected vector.

But there are many differences that come from that in the actual implementation.

The implementation we are discussing here (call it ‘B’) does the vector math in local surface space.

The Phong implementation discussed in NV_texture_shader (call it ‘P’) implements transform from local surface space to ‘light’ space per pixel, using three texture stages doing dot3 op.

‘B’ uses one 2d texture object

‘P’ uses two cubemap texture objects

‘B’ is more sensitive to vector shortening due to interpolation (Cass’s point)

‘P’ is less sensitive to that

‘B’ needs a vertex program to transform the H and L vectors into local surface basis (by the 3 basis vectors given for each vertex)

‘P’ needs a vertex program to transform the 3 basis vectors of the surface-local space to light-space (so the per-pixel transform takes the fetched normals from surface space directly to light space, where the cubemaps are built)
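For the ‘B’ path, that per-vertex transform into the surface-local basis is just three dot products; a minimal sketch (the type and function names are mine):

```c
/* Transform a vector v (e.g. H or L) into the surface-local basis
 * given by the per-vertex tangent T, binormal B and normal N.
 * This is what the vertex program does for the 'B' (Blinn) path. */
typedef struct { float x, y, z; } v3;

static float dot3(v3 a, v3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

v3 to_tangent_space(v3 v, v3 T, v3 B, v3 N)
{
    v3 r = { dot3(v, T), dot3(v, B), dot3(v, N) };
    return r;
}
```

With an orthonormal basis this is exactly a change of coordinates: with the identity basis the vector comes back unchanged.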

okapota,

The tessellation dependence of Blinn style specular comes from *what* you interpolate, not because the vectors are supplied per vertex. I tried to implement tessellation independent Blinn specular on GeForce3 but it lacked just a little bit of flexibility. I’m pretty sure it can be implemented artifact-free on the GF4 Ti (and Radeon 8500). I just ordered a Radeon 8500 exclusively for experimenting with specular (and new lighting models in general).

There’s a thread about the tessellation dependence here:

http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/003519.html

I think the basic visual problem with Blinn interpolation is not the normalization, but the following fact:

Assume a flat area of the bump map on a triangle.

Assume an infinite viewer and infinite light lighting model.

The maximum of the specular cosine in Blinn is never inside the triangle. It is always at one of the vertices.

The maximum of the specular cosine in Phong, can be inside the triangle.

To see why: the H vector is the same for all three vertices (because light and viewer are infinite – just directions). The three normals on the three vertices are N0, N1 and N2. Any point inside the triangle has three interpolation coefficients a0, a1 and a2, with a0+a1+a2=1.

The interpolated normal is a0*N0 + a1*N1 + a2*N2 and the specular cosine will be
H o (a0*N0 + a1*N1 + a2*N2)

(‘o’ is inner product)

which is equal to

a0 * (N0 o H) + a1 * (N1 o H) + a2 * (N2 o H)

This expression attains its maximum over (a0,a1,a2) always at extremal values, where two are zero and one is 1 – at the vertices.

On the other hand, the expression

ReflectVector( E, a0*N0 + a1*N1 + a2*N2) o L

(where ReflectVector(X,Y) reflects X around Y, E is the eye direction vector, and L is the light direction vector)

can attain maximum anywhere.

This is why Phong can have a nice circular specular highlight inside a triangle

The problem is that it’s incorrect to linearly interpolate the H vector. You need to interpolate displacement-to-light and displacement-to-eye, then compute the H vector ( there are other correct ways, such as interpolating angles ).

Personally, I prefer Blinn style to Phong. Blinn highlights look more real as they move across surfaces.

PH, when the light source is infinite and the viewer is infinite, the H vector is the same for all the vertices. Such a lighting model is also supported in fixed-function OpenGL lighting (and is the fastest; the infinite viewer is even the default).

I agree, it’s certainly valid.

yeah, it's so often true that when i walk around, i'm infinitely far away from the stuff i'm looking at. and the light sources i use to light the objects are very often infinitely far away too, that's why i can hold them in my hand, or they hang infinitely far away from my ceiling… or i read my new mail on my infinitely far away screen, which illuminates the room around it, where i sit infinitely far away from it… haha

Seriously, it's not so easy to visually tell the difference between local-viewer and infinite-viewer lighting models.

And if you look at the performance numbers of infinite vs. local lights, you'll want to stick to infinite…

nah, i know the speed differences, but who cares? i do – i don't want crappy, illogical stuff as a base in my engine. and as i'll probably use vertex programs anyway, who cares?

Yes, vertex programs offer a huge performance increase when implementing per-pixel specular. In any case, vertex programs will likely not be the bottleneck.

Using an infinite viewer is really not a good idea ( unless you like the chrome effect in Half-Life ). When looking around by moving the mouse, the highlights should remain fixed on the objects, this requires a local viewer.

Anyway, my point about the interpolation artifacts was aimed at local lights and viewer. If you use an infinite viewer and infinite lights, there won’t be any interpolation artifacts.

yes sure, but its theoretical stuff…

Originally posted by PH:

If you use an infinite viewer and infinite lights, there won’t be any interpolation artifacts.

Yes there will be. That was exactly my point in the long post with the little math. Interpolation in Blinn will not produce specular highlights (maxima) inside triangles, whereas Phong will.

well well, i lost you. anyway, i got my answer. before i start philosophizing about these methods and go into the depth of all this math, i think i'll better just code a few apps and see if they work. maybe then i'll have a clearer understanding.

PH, you ordered a radeon just for experimenting? where do you get the money to buy such an expensive card just for experimenting?

anyway, where is this phong shading described? in which presentation? i have them all, but i probably missed it.