Easy peasy bump mapping?

Let me make sure I fully understand the concept of object-space bump mapping.

OK, per-vertex, you must transform the light from world-space into that vertex’s space. How do you do that in a skinned mesh? Do you take the inverse of the skinning matrix (or use inverse matrices from the start)?

Granted that, you then, per-pixel, take this light vector (in object space) and dot the normal from the bump map with it. So, precisely, what is the bump map? Is it just a recording of the normal at a particular vertex? This is not a trivial thing to produce, and it has to be altered if the mesh changes. How do you go about building one of these bump maps from a height map (and, of course, texture mapping from that height map onto the model)?

It is the normal of the surface in object space. It doesn’t need to change if the mesh changes because you use the mesh deformation matrix to transform the light into object space for any changes that might apply to the mesh. Think of the object space normal as the absolute orientation of the surface at each point before any deformations are applied to the mesh.
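In code, a minimal sketch of that idea (illustrative only; the array names and helper are assumptions, and the matrices are column-major as OpenGL stores them). For a skinned vertex, the world-space light position is pushed through the inverse of each influencing bone's skinning matrix and blended by the skin weights:

  /* Transform a point by a column-major 4x4 matrix (OpenGL layout). */
  static void xformPoint(const float m[16], const float p[3], float out[3])
  {
      out[0] = m[0]*p[0] + m[4]*p[1] + m[8]*p[2]  + m[12];
      out[1] = m[1]*p[0] + m[5]*p[1] + m[9]*p[2]  + m[13];
      out[2] = m[2]*p[0] + m[6]*p[1] + m[10]*p[2] + m[14];
  }

  /* boneInv[i] holds the precomputed inverse of bone i's skinning matrix. */
  void lightToVertexSpace(const float boneInv[][16],
                          const int bones[], const float weights[], int n,
                          const float worldLightPos[3], float out[3])
  {
      out[0] = out[1] = out[2] = 0.0f;
      for (int i = 0; i < n; ++i) {
          float li[3];
          xformPoint(boneInv[bones[i]], worldLightPos, li);
          out[0] += weights[i]*li[0];
          out[1] += weights[i]*li[1];
          out[2] += weights[i]*li[2];
      }
      /* Subtract the rest-pose vertex position and normalize before
         encoding the vector for the per-pixel dot3. */
  }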

I’m not sure what your question means w.r.t. always using the inverse matrix. Basically, if you look at my transformation chain, you move from left to right using the matrix and right to left using the inverse matrix. However, transforming a point through a matrix is not the same as transforming a vector :-). So effectively you’re using the other matrix anyway.

See jwatte’s post on this.


First, to get the light into object space, you transform the light by the inverse of the normal transform matrix (which would take an object-space normal into light (world) space).

As the normal matrix is the transpose of the inverse of the position matrix, to be fully correct you have to invert that again. However, if your animation doesn’t use scale or shear, the normal matrix is just the position matrix with elements 12/13/14 (the translation, in OpenGL’s column-major layout) zeroed out, so the inverse is just the transpose of that.
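A minimal sketch of that shortcut (names are assumptions), assuming a rigid rotation + translation matrix, so multiplying by the transposed 3x3 part is the same as dotting the vector with the matrix columns:

  /* Bring a world-space light *direction* into object space, assuming the
     column-major model matrix m holds only rotation and translation (no
     scale or shear), so the inverse of its 3x3 part is its transpose. */
  void worldDirToObject(const float m[16], const float l[3], float out[3])
  {
      out[0] = m[0]*l[0] + m[1]*l[1] + m[2]*l[2];   /* column 0 . l */
      out[1] = m[4]*l[0] + m[5]*l[1] + m[6]*l[2];   /* column 1 . l */
      out[2] = m[8]*l[0] + m[9]*l[1] + m[10]*l[2];  /* column 2 . l */
  }

  /* For a light *position* rather than a direction, undo the translation
     (elements 12/13/14) first: subtract m[12], m[13], m[14] from the light
     position, then call worldDirToObject() on the result. */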

Regarding generating normal maps, it’s really no harder generating those in object space than generating them in tangent space. Of course, it IS harder to generate these than to just take some pre-baked map and slap it on an existing mesh. I’m aware of two ways of generating normal maps:

a) Take a heightfield-style bump map, and run a highpass filter on it to generate the normal map (a sketch follows after these two options). The local differential in the bump map must be applied to the direction of the normal, which means that you have to know how the texture is mapped onto the object. Either this is implicit (for tangent-space maps) or you have to examine the geometry to figure out which direction the normal, the S and the T coordinates point.

b) Take a low-poly version of your mesh and a high-poly version of your mesh. Shoot rays out from the low-poly version and find the closest intersection with the high-poly mesh. You can look at the distance traveled and generate a height map that way, and then run option a), or you can just look at the normal of the high-poly object at the point where the ray finds it. ATI has some nice tools and demos for this technique.
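A minimal sketch of option a), with assumed names: central differences on the heightfield give the slope per texel, which is turned into a unit normal. Orienting those normals into object space still requires the normal/S/T directions discussed above.

  #include <cmath>
  #include <vector>

  /* h is a w x rows heightfield; out receives 3 floats (nx, ny, nz) per
     texel. bumpScale controls how strong the bumps appear. */
  void heightToNormals(const std::vector<float>& h, int w, int rows,
                       float bumpScale, std::vector<float>& out)
  {
      out.resize(w * rows * 3);
      for (int y = 0; y < rows; ++y) {
          for (int x = 0; x < w; ++x) {
              /* Central differences, clamped at the borders. */
              float dx = h[y*w + (x+1 < w ? x+1 : x)] - h[y*w + (x > 0 ? x-1 : x)];
              float dy = h[(y+1 < rows ? y+1 : y)*w + x] - h[(y > 0 ? y-1 : y)*w + x];
              float nx = -dx * bumpScale;
              float ny = -dy * bumpScale;
              float nz = 1.0f;
              float len = std::sqrt(nx*nx + ny*ny + nz*nz);
              float* n = &out[(y*w + x) * 3];
              n[0] = nx / len; n[1] = ny / len; n[2] = nz / len;
          }
      }
  }

  /* To store this as a normal-map texture, remap each component from
     [-1,1] to [0,255]. */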

If by changing the normal map when the mesh changes you mean that you have to re-calculate the normal map after changing the mesh in your modeler, then that is correct. That’s usually part of your export/compile/prepare/package tool chain. Tools, in general, are, IMO, the biggest part of creating a new engine these days.

This is really an interesting discussion! I’m learning by the minute! Also, I have implemented tangent-space dot3 bump mapping in my engine! I will continue work on skinned meshes and will investigate doing object-space bump mapping. Thanks a lot to you guys/gals!

A screenshot:

hem.fyristorg.com/tobias.ohlsson/EngineII/bump_screen.jpg

It is not that pretty, but it is a start!

How could you accommodate colored lights, multiple lights, and light attenuation with bump mapping?

Does anyone know whether the ARB or the EXT extension is more common (currently I only support the EXT one)? I saw a post saying that the ARB version is somewhat different on NV and ATI. Some scaling differences…

I tested my app at work (on a strange Intel (brrr) gfx card) and it turned out to only support the ARB version…

Regards!
/hObbE

Looks cool! We clearly see the per-pixel lighting effect, and the bumps help a lot in telling where the light is relative to the cube.

Colored lights are easily done in multipass, and can be done in a single pass depending on how you use the texturing stages and, obviously, how many texture units your card supports.
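A minimal sketch of the single-pass variant, assuming two texture units with ARB_texture_env_combine and ARB_texture_env_dot3 (and that the ARB entry points are already loaded; the function and variable names are assumptions): unit 0 computes normal map . light vector, and unit 1 tints the result with the light color held in the texture-environment constant color.

  #include <GL/gl.h>
  #include <GL/glext.h>   /* assumes glActiveTextureARB etc. are loaded */

  void setupColoredDot3(GLuint normalMapTex, const GLfloat lightColor[4])
  {
      /* Unit 0: (normal map) dot (light vector packed into the primary
         color, remapped from [-1,1] to [0,1] per component). */
      glActiveTextureARB(GL_TEXTURE0_ARB);
      glEnable(GL_TEXTURE_2D);
      glBindTexture(GL_TEXTURE_2D, normalMapTex);
      glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
      glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGB_ARB);
      glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
      glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PRIMARY_COLOR_ARB);

      /* Unit 1: tint the dot3 result by the light color. A texture must be
         enabled here for the stage to run; its texels are not used. */
      glActiveTextureARB(GL_TEXTURE1_ARB);
      glEnable(GL_TEXTURE_2D);
      glBindTexture(GL_TEXTURE_2D, normalMapTex);
      glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
      glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
      glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
      glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_CONSTANT_ARB);
      glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, lightColor);
  }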

Multiple lights are logically done in multipass. Though if you have a very limited number of lights (say, 2 or 3), if your card supports a high number of texture units (say, 4 at the very least), and if you want optimal performance, then you can sometimes do it in a single pass. Anyway, I don’t recommend doing so because it tends to make the graphics engine a lot less flexible. One may say: “just keep this in mind for critical cases”.
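A minimal sketch of the multipass route (Light, lights[] and drawBumpPass() are hypothetical application-side names, not anything from this thread): the first light writes the framebuffer, and each further light is added on top with additive blending.

  #include <GL/gl.h>

  struct Light { float pos[3]; float color[3]; };   /* placeholder type */
  void drawBumpPass(const Light& light);            /* hypothetical helper that
                                                       sets up one dot3 pass */

  void drawLitScene(int numLights, const Light lights[])
  {
      glDepthFunc(GL_LEQUAL);              /* later passes redraw the same
                                              depth and must pass the test */
      for (int i = 0; i < numLights; ++i) {
          if (i == 0) {
              glDisable(GL_BLEND);         /* first light writes the base result */
          } else {
              glEnable(GL_BLEND);
              glBlendFunc(GL_ONE, GL_ONE); /* further lights add on top */
          }
          drawBumpPass(lights[i]);
      }
  }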

There are many techniques for light attenuation, depending on which hardware you have and what kind of attenuation you’re looking for (linear or quadratic). And to be honest, I haven’t implemented any of them yet.
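For what it’s worth, one common cheap approach (not something anyone here has posted, so take it as a sketch) is a per-vertex quadratic falloff that can be put into the vertex color and modulated with the dot3 result:

  /* 1 at the light position, falling off to 0 at 'radius' (clamped),
     roughly quadratic in between. */
  float attenuation(const float vertex[3], const float lightPos[3], float radius)
  {
      float dx = lightPos[0] - vertex[0];
      float dy = lightPos[1] - vertex[1];
      float dz = lightPos[2] - vertex[2];
      float d2 = dx*dx + dy*dy + dz*dz;
      float a = 1.0f - d2 / (radius * radius);
      return a > 0.0f ? a : 0.0f;
  }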

About extensions, that’s simple: always use ARB if possible.
Otherwise use whatever you like. EXT was once a kind of standard among extensions, but today only ARB is.
If you use other extensions, just keep in mind that they aren’t meant to be supported by all hardware, even though some (rare) vendor-specific extensions are widely supported and some of them even became part of a later core OpenGL specification (I’m especially thinking of the NV_blend_square extension, which is supported even by ATI hardware and is now included in the OpenGL 1.4 specification).
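A minimal sketch of that policy at startup (the extension strings here are just examples):

  #include <GL/gl.h>
  #include <cstring>

  /* Good enough for a sketch; a robust version would match whole
     space-delimited tokens instead of substrings. */
  bool hasExtension(const char* name)
  {
      const char* exts = (const char*) glGetString(GL_EXTENSIONS);
      return exts != 0 && std::strstr(exts, name) != 0;
  }

  /* usage:
       if (hasExtension("GL_ARB_texture_env_dot3"))         useArbPath();
       else if (hasExtension("GL_EXT_texture_env_combine")) useExtPath(); */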