Dot3 vs EMBM: Which is better?

In the specific case of bumpmapping, I fail to understand why numerous people claim that Dot3 has superior quality to EMBM.

Both can operate at the triangle level and linearly interpolate three vectors:

For Dot3, the interpolated vectors are the per-vertex tangent-space light directions (see the sketch after these two cases).

For EMBM, the interpolated vectors are traditionally the model-space normals, but the result is more accurate if they are first rotated into light space, since the 2D Du/Dv matrix transform cannot represent the complete 3D rotation needed to take the normal into light space.
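
To make the Dot3 case concrete, here is a rough sketch (my own illustration, not vendor code) of the per-vertex setup: the light vector is rotated into tangent space by the vertex's tangent/binormal/normal basis, and that result is what gets interpolated across the triangle.

    /* Sketch: rotate the light vector into tangent space at one vertex.
       t, b, n are the vertex tangent, binormal, and normal; l is the
       normalized direction from the vertex to the light. The result is
       interpolated across the triangle and dotted with the bump normal. */
    void light_to_tangent_space(const float t[3], const float b[3],
                                const float n[3], const float l[3],
                                float out[3])
    {
        out[0] = t[0]*l[0] + t[1]*l[1] + t[2]*l[2]; /* T . L */
        out[1] = b[0]*l[0] + b[1]*l[1] + b[2]*l[2]; /* B . L */
        out[2] = n[0]*l[0] + n[1]*l[1] + n[2]*l[2]; /* N . L */
    }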

Beyond that, however, Dot3 only performs a dot product and produces a single linear scalar that can be used in the classic Lambertian lighting equation.

However, EMBM uses a 2D texture lookup for the perturbed normal, which can emulate the Dot3 effect by using a texture with the classic white-to-black falloff circle that numerically encodes the 0-to-1 range.
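
For instance, here is a minimal sketch of how such a falloff texture could be built, assuming the perturbed normal's (x, y) components index the map and the light sits along +z:

    #include <math.h>

    /* Sketch: fill a size x size luminance texture with the classic
       N-dot-L falloff circle. Texel (s, t) is treated as the (x, y) of
       a unit normal; its intensity is the z component, i.e. the dot
       product with a light along +z, clamped to zero outside the circle. */
    void build_falloff_texture(unsigned char *tex, int size)
    {
        for (int t = 0; t < size; t++) {
            for (int s = 0; s < size; s++) {
                float x = 2.0f * (s + 0.5f) / size - 1.0f;
                float y = 2.0f * (t + 0.5f) / size - 1.0f;
                float r2 = x * x + y * y;
                float z = (r2 < 1.0f) ? (float)sqrt(1.0 - r2) : 0.0f;
                tex[t * size + s] = (unsigned char)(z * 255.0f);
            }
        }
    }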

Yet EMBM can make that falloff non-linear, can embed colors, can model anisotropy, and so on. It has much more potential because it uses a texture lookup instead of the fixed Dot3 math.

The 8/8-bit Du/Dv perturbations, or even the 5/5/6-bit dU/dV/luminance format, allow the same changes in shape as a normal map.
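
As a sketch of what those Du/Dv texels encode (assuming the usual height-field derivation; the scale factor is illustrative, and edges wrap for a tileable map):

    /* Sketch: derive signed 8-bit Du/Dv texels from a height field,
       the usual way such perturbation maps are built. */
    void heightfield_to_dudv(const unsigned char *height, signed char *dudv,
                             int w, int h, float scale)
    {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int x1 = (x + 1) % w, y1 = (y + 1) % h;
                float du = (height[y * w + x1] - height[y * w + x]) * scale;
                float dv = (height[y1 * w + x] - height[y * w + x]) * scale;
                if (du > 127.0f) du = 127.0f;  if (du < -128.0f) du = -128.0f;
                if (dv > 127.0f) dv = 127.0f;  if (dv < -128.0f) dv = -128.0f;
                dudv[(y * w + x) * 2 + 0] = (signed char)du;
                dudv[(y * w + x) * 2 + 1] = (signed char)dv;
            }
        }
    }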

Clearly, adding EMBM is costly in hardware and performance (one extra texture unit), but it has the potential to do the same things as Dot3 at greater precision (interpolating floats instead of signed 8-bit chars).

So, does anyone know what I am missing?

EMBM is a pretty weak form of “bumpmapping”. It is rather inaccurate to even call it that, because it is really just a dependent texture read on a 2D texture with an offset and a 2x2 matrix. It’s better described as environment mapping with an offset to approximate a bumpy look.
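
Roughly, the per-pixel coordinate math amounts to this (a sketch; the exact row/column convention varies by API):

    /* Sketch of the EMBM dependent read: the (du, dv) pair fetched from
       the bump map is transformed by a single global 2x2 matrix and
       added to the environment map's coordinates. */
    void embm_offset(float s0, float t0,     /* env-map coordinates */
                     float du, float dv,     /* fetched from the bump map */
                     const float m[4],       /* the global 2x2 matrix */
                     float *s1, float *t1)   /* perturbed coordinates */
    {
        *s1 = s0 + m[0] * du + m[1] * dv;
        *t1 = t0 + m[2] * du + m[3] * dv;
        /* the color is then read from the 2D environment map at (s1, t1) */
    }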

The only kind of environment map that can be shoehorned into a 2D texture is a spheremap. Spheremaps have all sorts of issues associated with them.

When you try to cram all the spheremap and reflection math into a 2x2 matrix, you discover that you are trying to linearly approximate an extremely nonlinear function.

Even then, you generally have to assume that the surface is flat or results get even worse.

The cost of adding EMBM to hardware is not just an extra texture unit. It’s a lot more than that, because it’s a dependent texture read.

Per-pixel shading using dot products can compute accurate diffuse and specular lighting contributions.

It is true that the “dot3” (I hate that name) texture environment mode by itself, either as found in the dot3 OpenGL extension or in DX7, is fairly weak. That’s why I urge anyone interested in doing real per-pixel shading to use register combiners.

Dependent texture reads are useful, but EMBM is not a very general implementation of them. When we offer dependent texture reads, you can expect that we will offer far more power and flexibility.

  • Matt

Matt,
Thanks for your prompt and detailed response. NVIDIA has made a good move in having you patrol these boards. Although I understand your comments, I think my question was a bit unclear. Perhaps I should’ve used the phrase “Dependent Texture Reads” (DTR) instead of EMBM, since my question doesn’t revolve around using its 2x2 matrix or anything other than dU/dV offsets within an environment map.

============
I’m fully aware of the wasted space and poor distribution inherent in environment maps, but whether one uses a sphere map, dual-paraboloid map, or cube map, all of these types can accurately represent the clamped result of N dot L, given enough texture resolution. When I say “clamped result of N dot L”, I mean that the environment map is the classic circle whose center is pure white and falls off non-linearly to black around its sides, which I’ll call “light reflection mapping” or “LRM.”

What seems improved by doing N dot L via LRM instead of with register Dot3 math is the ability to introduce artistic/anisotropic/filtering effects and/or to complement it with dynamically generated reflections/refractions.

From what I know, you cannot mix reflection/refraction and bumpmapping with register-based N dot L math. Thus it seems some form of environment mapping is needed if the specular lighting includes reflection/refraction.

========
LRM can be used on a per-material basis to store the specular-exponent falloff (which may potentially be combined with anisotropic details).

From what I know, the next generation of pixel shaders won’t be doing true per-pixel exponentiation for specular, just approximations. Thus it seems LRM is needed.
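
As a sketch of what baking the exponent into an LRM might look like (the shininess parameter and byte layout are illustrative, not any particular hardware’s format):

    #include <math.h>

    /* Sketch: the same falloff circle as before, but with a material's
       specular exponent baked into the texels, so no per-pixel pow()
       is needed at run time. */
    void build_specular_falloff(unsigned char *tex, int size, float shininess)
    {
        for (int t = 0; t < size; t++) {
            for (int s = 0; s < size; s++) {
                float x = 2.0f * (s + 0.5f) / size - 1.0f;
                float y = 2.0f * (t + 0.5f) / size - 1.0f;
                float r2 = x * x + y * y;
                float ndotl = (r2 < 1.0f) ? (float)sqrt(1.0 - r2) : 0.0f;
                tex[t * size + s] =
                    (unsigned char)(powf(ndotl, shininess) * 255.0f);
            }
        }
    }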

==========
So what is wrong with LRM?

It seems clear that LRM combined with DTR bump-mapping introduces much distortion, since the DTR’s dU/dVs are constant linear offsets into the LRM, which has a severely non-linear distribution (particularly if the LRM uses sphere maps). However, I don’t see why the LRM can’t be built with the inverse distortion baked in, so that the dU/dV offsets move in spherical space instead of linear space (thus minimizing the bump distortion).

This would introduce a second step if you dynamically rendered an environment map, since you’d have to render-to-texture and then render-to-texture again using a distorted quad grid to spread the environment map non-linearly in the final texture (much as one has to render a sphere map on the fly).

However, I don’t see why this wouldn’t minimize the dU/dV offset distortion of DTR by making all offset steps one unit on the sphere (which I’d guess will always require a cube map).

So, my question is: if one can combine DTR with a predistorted cube map, what is superior about Dot3, aside from general performance, texture-read bandwidth, texture-unit cost, and so on?

Thanks for any feedback on this-

Hmmm. So you are suggesting using EMBM to implement a diffuse lighting scheme, i.e., do a lookup on the normal to compute the lighting result, and to use, say, (Nx,Ny) as texture coordinates rather than spheremap coords.

The most obvious problem is that your 1D texture can only represent a single hemisphere out of the full sphere of normals.

One of the trickiest but most often ignored things in doing per-pixel lighting is getting proper clamping behavior for self-shadowing at both the bump and object levels. If you look at Mark Kilgard’s bumpdemo, you will see that he put a fair amount of effort into this.

On the simplest level, self-shadowing means that a surface is unilluminated if N dot L is negative, that is, the dot product is clamped to zero before being used. However, on a bumpmapped surface, even if the bumps point towards the light, if the geometry of the surface does not, you should still see a shadow. The lighting contribution comes from N_bump dot L, but if N_surface dot L is negative, it should be zeroed out. The easiest way to implement this is to use an extremely bright OpenGL light source (say, 100 units bright) and vertex normals. The resulting color can modulate the bump contribution.
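
In scalar terms, the clamping amounts to something like this (a sketch of the idea, not the bumpdemo code):

    /* Sketch of the two-level clamping described above. In hardware the
       N_surface gate is approximated by modulating with the saturated
       vertex-lit color from the very bright light source. */
    float dot3(const float a[3], const float b[3])
    {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    float self_shadowed_diffuse(const float n_bump[3],
                                const float n_surface[3],
                                const float l[3])
    {
        float d = dot3(n_bump, l);
        if (d < 0.0f) d = 0.0f;                  /* clamp the bump term */
        if (dot3(n_surface, l) < 0.0f) d = 0.0f; /* geometric self-shadow */
        return d;
    }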

I do not see how emulating dot product lighting with EMBM can achieve the full clamping required for diffuse lighting. If you shift the normal using the offset outside of the hemisphere, you will not get clamping, you will get distortion.

It seems to me that you still, at minimum, have a fairly nonlinear function, though less so than with spheremapping + EMBM. Not only is it nonlinear, but EMBM faces a second problem in that the 2x2 bump matrix is global state, not per-vertex interpolated state. This is the specific restriction that makes non-flat objects so difficult with EMBM.

  • Matt

I will add that you are correct in that dot product lighting is completely incapable of mixing bumpmapping with reflection/refraction.

However, we are well aware of these limitations and will most certainly address them in the future.

  • Matt

Once again, thanks for the feedback, Matt-

I guess I didn’t clarify that it only makes sense to use object normals that have been transformed into light space (via the CPU or a vertex program), so there is no “global state” light vector. I see no need for EMBM and its 2x2 matrix, just generic Dependent Texture Reads (DTR).

In your second paragraph, I’m assuming you meant a 2D texture, not 1D. However, I was suggesting a cube-map texture, which provides both hemispheres of normals and thus automatically handles the self-shadowing issue. Perhaps it’s too costly to use DTR with cube maps, since there are three texture coordinates that would be offset with a 2D Du/Dv, and wrapping x over into z, or y over into z, would add a new level of complexity.

However, if you do shadow mapping to alpha-test the pixels that are lit (and set stencil for further passes if needed), one doesn’t have to handle self-shadowing, since bumps cannot be concave and thus always point into the facing hemisphere of the triangle in which they are rasterized. We currently use that approach, but we really need four passes per light to soften the border edges and remove the aliasing that comes with an 8-bit shadow depth (not so bad with 16-bit depth on a GeForce). Note that this approach does eliminate the case where N_surface dot L is opposed to N_bump dot L.

Lastly, I’m curious about any HW approach to mixing bumpmapping with reflection/refraction.

Yes, shadow mapping will handle the self-shadowing (assuming you deal with all the messy problems shadow mapping involves properly). If you want to do things “right”, you do need shadows of some kind (stencil or shadow map), though if you are concerned about getting good real-time performance you may need to skip real shadows and just do self-shadowing.

I do not know of any consumer HW today that supports any form of dependent texture reads on cubemaps. I may be wrong. EMBM only applies to pure 2D textures.

Look at the DX8 pixel shaders. Since I’ve already said publicly that we will support everything there and then some in OpenGL “soon”, you might as well consider it to be preliminary documentation. Specifically, look at the DX8 “texture addressing instructions”. The DX8 documentation sucks so much, though, that you may have trouble figuring out what is what.

  • Matt

Thanks once again-

Although we use OpenGL, we voraciously consume anything in the D3D realm, including X-Box dev material. Too bad MS doesn’t spend more of their $$ on documentation; it shouldn’t cost much, but it would maintain developer enthusiasm and out-of-the-box thinking. In any case, we are very excited about the upcoming NV20 hardware, which must support DTR and cubemapping according to the pixel-shader spec that allows DTR on XYZW.

If the Radeon or a next-gen Radeon supports EMBM with cube maps, they haven’t exposed it yet, and their dev relations have yet to respond to inquiries about it.

The thing of most concern with the NV20 hardware is the hope that it will continue to extend the register combiners pioneered by NV10/NV15, and that there will be little cost to switching register-combiner state (in one batch). Our shading system builds its ‘shade trees’ (which textures/constants to use and the equations to combine them) on the fly to match a given chipset.

With pixel shaders, the compiler/loader supposedly takes too much time to generate different pixel shaders on the fly. It’s useless to us if that’s the case, but we prefer the metal-level EnvComb4/RegCombiners/etc. anyway…

-jake

If you have further concerns about these issues, it is probably better to take them off this forum. My email is in my profile.

  • Matt