Ok, I have what may be a stupid question. I understand the concept behind using a normal map and lighting vectors (via glColor) with dot3 blending to get bump effects. However, the normals in the normal map are only aligned properly when the surface the map is applied to is perpendicular to the vector (0,1,0). Am I using this the wrong way? Is there a better way? Am I even explaining this properly?

Originally posted by john_at_kbs_is:
[b]Ok, I have what may be a stupid question. I understand the concept behind using a normal map and lighting vectors (via glColor) with dot3 blending to get bump effects. However, the normals in the normal map are only aligned properly when the surface the map is applied to is perpendicular to the vector (0,1,0). Am I using this the wrong way? Is there a better way? Am I even explaining this properly?

Thanks…

John.[/b]

The normals in the normal map are in tangent space (or texture space, if you prefer). You must rotate your light vector into tangent space before doing the dot product. Search this forum for "tangent space" for a more detailed description.
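A minimal sketch of that rotation (the function and variable names here are my own, not from any particular demo): dot the object-space light vector against the per-vertex tangent, binormal, and normal, then pack the result into the 0..1 range that glColor expects for dot3 blending:

```c
#include <math.h>

/* Rotate an object-space light vector into tangent space by dotting it
 * with the per-vertex tangent t, binormal b, and normal n (all xyz). */
static void to_tangent_space(const float t[3], const float b[3],
                             const float n[3], const float light[3],
                             float out[3])
{
    out[0] = t[0]*light[0] + t[1]*light[1] + t[2]*light[2]; /* dot(t, L) */
    out[1] = b[0]*light[0] + b[1]*light[1] + b[2]*light[2]; /* dot(b, L) */
    out[2] = n[0]*light[0] + n[1]*light[1] + n[2]*light[2]; /* dot(n, L) */
}

/* Bias a unit vector from [-1,1] into the [0,1] range of a color. */
static void pack_vector(const float v[3], float rgb[3])
{
    rgb[0] = 0.5f * v[0] + 0.5f;
    rgb[1] = 0.5f * v[1] + 0.5f;
    rgb[2] = 0.5f * v[2] + 0.5f;
}
```

With an identity basis the light vector passes through unchanged, which is a handy sanity check when debugging.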

Ok, now it's all making sense. I do have one further question:

All of the info I’ve read on converting to tangent space says to use the surface normal and the vector parallel to the u texture vector in object space. The documents also refer to these vectors as perpendicular to each other. This is not the case with my surface normals, because my normals are smoothed over the object.

Is this correct and if so, would anyone have a suggestion on how to get the tangent vector?

In the NVIDIA OpenGL SDK there is a function in one of their math headers called tangent_basis, which constructs the orthonormal basis for the matrix that transforms an object-space vector into tangent space. They use it in the demo that shows simple bump mapping with vertex programs; the model that gets bump mapped is some kind of big dinosaur-looking head.

There are a few presentations about bump mapping and setting up tangent space on NVIDIA's site. Check those out.

This site here: http://members.rogers.com/deseric/tangentspace.htm goes into some good detail about computing tangent space. It assumes you know how to do partial derivatives and such, but even if you don't, the end of the paper shows what to do, with the math already done.
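For reference, a per-triangle tangent computation along those lines can be sketched roughly like this (function and parameter names are illustrative, not taken from the paper): solve the 2x2 system relating the triangle's edges to their texture-coordinate deltas.

```c
/* Compute the tangent of one triangle from its positions p0..p2 and
 * texcoords (u,v). The tangent is the direction of increasing u. */
static void triangle_tangent(const float p0[3], const float p1[3],
                             const float p2[3],
                             float u0, float v0, float u1, float v1,
                             float u2, float v2, float tangent[3])
{
    float e1[3], e2[3];
    for (int i = 0; i < 3; ++i) {
        e1[i] = p1[i] - p0[i];
        e2[i] = p2[i] - p0[i];
    }
    float du1 = u1 - u0, dv1 = v1 - v0;
    float du2 = u2 - u0, dv2 = v2 - v0;
    /* Assumes the texcoords are not degenerate (determinant != 0). */
    float r = 1.0f / (du1 * dv2 - du2 * dv1);
    for (int i = 0; i < 3; ++i)
        tangent[i] = r * (dv2 * e1[i] - dv1 * e2[i]);
}
```

On a triangle whose texcoords line up with object axes, the tangent comes out as the u axis, which is an easy way to check the sign conventions.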

Originally posted by john_at_kbs_is: This is not the case with my surface normals, because my normals are smoothed over the object.

I believe you’ll have to use surface normals, not vertex normals - it shouldn’t be much of a problem as you’ll have per-vertex lighting turned off, surely?
Of course, it also means you’ll have to duplicate identical vertices because they’ll now have different normals…

[This message has been edited by knackered (edited 04-11-2002).]

This is not the case with my surface normals, because my normals are smoothed over the object.

Whether you are using smooth normals or not, they will be ROUGHLY perpendicular to the direction of increasing u texture coordinate. I mean, normals should point outward from the surface at each vertex, and the texture coordinates should increase along the surface at each vertex.

To make them exactly perpendicular, just take the direction of increasing u and subtract from this vector anything along the direction of your normal.

I would not recommend going from vertex to face normals, whether you use per-vertex lighting or not. The vertex normals are still used in per-pixel lighting, and they're more accurate because they don't change abruptly across edges like face normals do (they also contain three times more detail about the surface, and they lend themselves better to vertex arrays).

All of the info I’ve read on converting to tangent space says to use the surface normal and the vector parallel to the u texture vector in object space. The documents also refer to these vectors as perpendicular to each other. This is not the case with my surface normals, because my normals are smoothed over the object.

That is correct. Your tangent points in the “u” direction, and your binormal points in the “v” direction.

Actually, there is one thing that is not correct: the tangent, normal, and binormal need not be perpendicular. In fact, making them perpendicular without changing the texture coordinates to match is incorrect.

If they are not perpendicular, then you will have to, after doing the tangent-space transform, renormalize the light vector.

Technically, even that may not be good enough. If you have a GeForce 3, you should send your normal, binormal, and tangent vectors as texture coordinates. Use the texture shaders to perform a matrix multiply operation between those vectors and the light vector. As the last step in the shader, apply that light vector to a renormalization cube map. Granted, you have used up all of your texture units, but you will have nearly pixel-perfect bump mapping. In lieu of the renormalization cube map, you can renormalize in the register combiners.

On a Radeon 8500, you have an easier time doing this. You can do both a diffuse and specular bump map in one pass, with room left over for a base texture and maybe even a detail map (perhaps with restrictions on how the detail map is textured to the polygon).

how about smoothing the tangent?!
if you read the papers you'll see that they first create the tangent space per face (as you do with the normal) and then simply sum them up for the vertices. that way you get a smoothed tangent space…

there are several problems with this if the mesh has bad texcoords but you got the idea

Smoothing the tangent vector won't work for all surfaces. My scenes have objects with multi-sub-object maps on them, and the faces can have their own texture coords; being on the same object, the normals still get smoothed together.

Korval:

vectors don't need to be perpendicular? excellent! that makes my life a lot easier. I'll try it out. Thanks, man.

To All:

Using the smoothed normal is very important in calculating the bump properly on rounded surfaces. Otherwise there will be edging.

Thanks all, this has answered all my questions. Hopefully someday I’ll get a website and I can share my screen shots with you guys.

I do have one problem: the bump map takes into account the dimming of the light due to its angle to the surface, and I know that I should not calculate this into my light maps. The problem is this: my light map is made up of several lights, and the vectors I'm using for the bump map are averages of the vectors to the lights that light the surface. That math is like saying:

al*aa + bl*ba = (al + bl) * ((aa + ba) / 2)

where
al = first light amount
bl = second light amount
aa = first light angle
ba = second light angle

this is not correct…
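To see the discrepancy with concrete (made-up) numbers: take one light at full intensity facing the surface and a second at half intensity at a grazing angle. The per-light sum and the averaged-angle shortcut disagree.

```c
/* Left side of the equation above: sum each light's own contribution. */
static float bump_sum(float al, float aa, float bl, float ba)
{
    return al * aa + bl * ba;
}

/* Right side: total intensity times the averaged angle term. */
static float bump_averaged(float al, float aa, float bl, float ba)
{
    return (al + bl) * ((aa + ba) / 2.0f);
}
```

With al = 1, aa = 1, bl = 0.5, ba = 0, the correct sum is 1.0 but the averaged version gives 0.75 — the bright facing light gets unfairly dimmed by the grazing one.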

So do I need to create a texture layer for each light? Or is there a shortcut?

You need to send in a light vector for each light, each of which uses the same bump map texture. Unfortunately, there's only a limited number of available light vectors (i.e. primary and secondary color). Thus, you have to multi-pass, accumulating each light additively in the frame buffer, and modulate your color map last.

Luckily, this approach (one or two lights per pass) works very well with stencil shadows or shadow maps, so it’s not a huge drawback if you wanted to do those things, too.

That’s what I was afraid of. I’m going to try a few tricks to limit the texturing passes; one per light is a little rough. When you say secondary color, do you mean the EXT_secondary_color extension?