Another bump mapping question

Hello again, coders! Alright, here's the lowdown. I thought I came up with this perfect per-vertex bump-mapping algorithm, but it just won't work! You can see some screenshots at (a page on my site).
Can anyone help? I've commented each main piece of code as best I could, so hopefully the code explains itself. Any insight would be much appreciated.

void Mesh::GenST( Decimal *ptrM, Decimal *notused, Decimal *lpES, Decimal *notused2, float lumens)
{
	float *tri = &this->strip[0];		// Stores vertices
	float *array = &this->nmapVert[0];	// Stores texture coordinates for normal map
	float *nrmlOut = &this->nMap[0];	// Normal to vertex in object space
	float mag;

	float lvES[3];	// Light vector in eye space

	float lvTS[3];	// Light vector in tangent space
	float lvOS[3];	// Light vector in object space

	float inverted[3];

	//	Variables used in rotating the light vector into tangent space
	float COS, SIN, t1, tx, ty, sx, sy, axis[3], vmat[16];
	unsigned long max = this->endt * 2 * 3;

	float txx;
	float tyy;
	float txy;

	//	Vertex in eye space
	float test[3];

	//	Distance from light to plane in eye space
	float dstOut;
	float negd;
	float rsq;

	float nES[3];

	for (this->t = this->begt; this->t < max; ++this->t, array += 2, tri += 3, nrmlOut += 3)
	{
		//	Transform vertex from object space to eye space
		test[0] = ptrM[0] * tri[0] + ptrM[4] * tri[1] + ptrM[8]  * tri[2] + ptrM[12];
		test[1] = ptrM[1] * tri[0] + ptrM[5] * tri[1] + ptrM[9]  * tri[2] + ptrM[13];
		test[2] = ptrM[2] * tri[0] + ptrM[6] * tri[1] + ptrM[10] * tri[2] + ptrM[14];

		//	Transform normal into eye space
		nES[0] = ptrM[0] * nrmlOut[0] + ptrM[4] * nrmlOut[1] + ptrM[8]  * nrmlOut[2];
		nES[1] = ptrM[1] * nrmlOut[0] + ptrM[5] * nrmlOut[1] + ptrM[9]  * nrmlOut[2];
		nES[2] = ptrM[2] * nrmlOut[0] + ptrM[6] * nrmlOut[1] + ptrM[10] * nrmlOut[2];

		negd =	nES[0] * test[0] +
				nES[1] * test[1] +
				nES[2] * test[2];

		dstOut =	nES[0] * lpES[0] +
					nES[1] * lpES[1] +
					nES[2] * lpES[2] - negd;

		rsq =	nES[0] * nES[0] +
				nES[1] * nES[1] +
				nES[2] * nES[2];
		if (rsq)
			rsq = sqrt(rsq);

		//	Find the inverted translate vector (could be hoisted out of the
		//	loop, since ptrM does not change per vertex)
		InvertedTranslate(inverted, ptrM);

		//	Find light vector in eye space (vector from vertex to light)
		lvES[0] = lpES[0] - test[0];
		lvES[1] = lpES[1] - test[1];
		lvES[2] = lpES[2] - test[2];

		//	Transform lvES into object space (transpose of the rotation part of ptrM)
		lvOS[0] = ptrM[0] * lvES[0] + ptrM[1] * lvES[1] + ptrM[2] * lvES[2] + inverted[0];
		lvOS[1] = ptrM[4] * lvES[0] + ptrM[5] * lvES[1] + ptrM[6] * lvES[2] + inverted[1];
		lvOS[2] = ptrM[8] * lvES[0] + ptrM[9] * lvES[1] + ptrM[10] * lvES[2] + inverted[2];

		//	I assume that if I rotate lvOS around the axis (a x b) / |a x b|
		//	(where b is <0, 0, 1>) by arcsin(|a x b|), then lvOS ends up in
		//	tangent space -- is this right?

		//	No longer needed; since axis[2] is always zero these terms vanish
		//float tzz = t1 * axis[2] * axis[2];
		//float txz = tx * axis[2];
		//float tyz = ty * axis[2];
		//float sz = SIN * axis[2];

		COS = nrmlOut[2];
		SIN = sqrt(nrmlOut[0] * nrmlOut[0] + nrmlOut[1] * nrmlOut[1]);

		axis[0] = nrmlOut[1];
		axis[1] = -nrmlOut[0];
		axis[2] = 0;

		if (SIN)
		{	//	Braces matter here: without them only the first division was conditional
			axis[0] /= SIN;
			axis[1] /= SIN;
		}
		t1 = 1.0f - COS;

		tx = t1 * axis[0];
		ty = t1 * axis[1];

		txx = tx * axis[0];
		tyy = ty * axis[1];
		txy = tx * axis[1];

		sx = SIN * axis[0];
		sy = SIN * axis[1];

		vmat[0] = txx + COS;	vmat[1] = txy;		vmat[2] = -sy;
		vmat[4] = txy;		vmat[5] = tyy + COS;	vmat[6] = sx;
		vmat[8] = sy;		vmat[9] = -sx;		vmat[10] = COS;

		lvTS[0] =	lvOS[0] * vmat[0] +
				lvOS[1] * vmat[4] +
				lvOS[2] * vmat[8];
		lvTS[1] =	lvOS[0] * vmat[1] +
				lvOS[1] * vmat[5] +
				lvOS[2] * vmat[9];
		lvTS[2] =	lvOS[0] * vmat[2] +
				lvOS[1] * vmat[6] +
				lvOS[2] * vmat[10];

		mag =	lvTS[0] * lvTS[0] +
				lvTS[1] * lvTS[1] +
				lvTS[2] * lvTS[2];

		if (mag)
		{	//	Braces again: normalize all three components, not just the first
			mag = 1 / sqrt(mag);
			lvTS[0] *= mag;
			lvTS[1] *= mag;
			lvTS[2] *= mag;
		}
		array[0] = lvTS[0] * rsq / (dstOut * lumens) + 0.5f;
		array[1] = lvTS[1] * rsq / (dstOut * lumens) + 0.5f;
	}
}

[This message has been edited by yoale (edited 05-07-2003).]


hmmm… Well, for one, bump mapping should not be a per-vertex thing. To get realistic-looking bump maps, you'll probably have to use vertex/fragment programs.

If bump mapping were done per vertex, the whole point of it would be null and void, since bump maps are designed to reduce the face count while keeping a high level of detail.

I suggest you take a look at the plethora of documentation on Cg and other vertex/fragment programming documentation.

… or read my fragment level Phong illumination article:

It's D3D, but it should be easy to port to OpenGL.

Not to insult anyone (I don't like treating people like jerks), but I already knew bump mapping is done per pixel; you may have missed the whole point of my program. What I was trying to achieve was tagging each vertex with texture coordinates. These coordinates would then be looked up in a light map for a given color.

Given a vertex, the normal of the plane the vertex lies on, and the light position, there should be a way to calculate a tangent-space vector. The x and y coordinates of this vector should map into the texture-map coordinates. The program works fine for flat surfaces (even taking light distance into account), but fails on curved surfaces (as seen in the screenshots I posted). I know without a doubt that the normals I use for any given surface are perpendicular to the vertex's plane. I hope that clarifies things.

Humus, thanks for your page, it was a pleasure to look through. I found something interesting in your code: you use a uVec and a vVec. I assume you gather that data from neighboring vertices, don't you? If not, how did you calculate them? I have an algorithm like yours that runs fine when neighboring vertices are known, but my new algorithm needs to calculate them on the fly with just a single vertex and the normal passed in. Thanks again!

Actually, I just load a .t3d file which provides the tangent and binormal (uVec & vVec) for free.

Humus, bad ass article, I am going to read it as soon as I get a chance… I like it…
Hey, did you start with ATI at last?


There is no way to get the tangent space vectors for an arbitrary single vertex, because an orientation only exists if you have at least three vertices.

Most bump-mapping implementations pre-calculate their tangent-space u and v vectors (binormal and tangent) in the export process, when they also calculate normals, merge/split vertices based on texture coordinates, etc.

If you can’t inspect nearby vertices, then you have to pre-calculate the data, or make some horrible assumptions, such as “u” always growing along the object-space X axis or something.
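The per-triangle derivation jwatte is referring to can be sketched like this: three positions plus their texture coordinates determine where u and v grow. The function name and layout here are mine, not from any particular exporter.

```c
/* Derive tangent (direction of increasing u) and binormal (direction of
 * increasing v) for a triangle given its positions and texture coords.
 * Solves e1 = du1*T + dv1*B, e2 = du2*T + dv2*B for T and B. */
static void ComputeTangentBasis(const float p0[3], const float p1[3], const float p2[3],
                                const float uv0[2], const float uv1[2], const float uv2[2],
                                float tangent[3], float binormal[3])
{
    float e1[3] = { p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2] };
    float e2[3] = { p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2] };
    float du1 = uv1[0] - uv0[0], dv1 = uv1[1] - uv0[1];
    float du2 = uv2[0] - uv0[0], dv2 = uv2[1] - uv0[1];

    float det = du1 * dv2 - du2 * dv1;
    float r = (det != 0.0f) ? 1.0f / det : 0.0f;  /* degenerate UV mapping: give up */

    for (int i = 0; i < 3; ++i) {
        tangent[i]  = r * (dv2 * e1[i] - dv1 * e2[i]);
        binormal[i] = r * (du1 * e2[i] - du2 * e1[i]);
    }
}
```

Note that this is exactly why a single vertex is not enough: det depends on the UV spread across the whole triangle.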

Originally posted by mancha:
Humus, bad ass article, I am going to read it as soon as I get a chance… I like it…
Hey, did you start with ATI at last?

Yup, I was there for ten weeks, but I'm back home again now. There are pictures from that time up on my site too. I've also been offered the chance to come back when I've finished my studies.

(off the original topic, sorry)


Looked around at your site, and looks like some pretty cool stuff. I looked into your infinite terrain system because that is a major interest of mine. My tests currently do more octaves of perlin noise (I think you said you did two octaves, if I understood correctly), and I have begun to experiment with strange things like using one perlin noise function to modify parameters (like persistence) of another perlin noise system which is actually making the heightfield. This allows the types of terrain to vary over different regions, and to vary smoothly. I’m still trying to find some way to realistically generate cliffs/plateaus out of the same engine that generates my rolling hillsides. I think I can layer yet another perlin noise generator that could specify a “discretization level” for various regions, which would give me a way to get the sharp steps necessary for cliffs and such.

One thing that I have found elsewhere that I intend to do to my own code is to move the landscape generation into a vertex program. The nVidia FX browser includes such a demonstration with a two octave perlin noise function. The advantage is that you only ever “compile” one land tile into the video card (or maybe a few tiles at different LODs). You can render as many tiles as you want by shifting the modelview, storing a “TilePosition” global for the vertex program to offset coordinates with, and rendering your single tile. In theory, this should allow tremendously large landscapes, as long as your card can handle the vertex program load. Even if it doesn’t go any faster, it definitely saves on precious video card memory…

I intend to use perlin noise generation for a host of other issues related to the infinite terrain (tree/rock distribution, creature/building distribution, etc can all be approached with this technique, providing an infinite world on top of the infinite landscape), so I take considerable interest in your “perlin noise in a fragment program” system. I will definitely have to see if I can use that to speed up my noise functions.


YES! I'm glad you posted a reply, jwatte. I wondered, though, how to pass in the extra normal, which was making me scratch my head in confusion. So I requested that the next revision of OpenGL allow a second normal to be processed with vertices. Here's a reply I got.

Originally posted by Korval:
If you’re using vertex programs, you can already do this. You use attribute arrays. I forget what the precise ARB_vp term is, but that’s the NV_vp term for it. You can have 16 arbitrary attribute arrays.

If only I had known that from the beginning. Geez, the shame… I feel like such a newbie.
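For anyone else who missed it, the setup Korval describes looks roughly like this under ARB_vertex_program. This is a hedged fragment, not a complete program: it assumes a live GL context, that the extension entry points have already been loaded (e.g. via wglGetProcAddress), and attribute index 6 is an arbitrary free slot I picked for the example.

```c
#include <GL/gl.h>
#include <GL/glext.h>  /* declares the ARB_vertex_program types/entry points */

/* Feed a per-vertex "second normal" through generic attribute array 6,
 * where the vertex program can read it as vertex.attrib[6]. */
static void DrawWithSecondNormal(const float *positions,
                                 const float *secondNormals, int vertexCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);

    glEnableVertexAttribArrayARB(6);
    glVertexAttribPointerARB(6, 3, GL_FLOAT, GL_FALSE, 0, secondNormals);

    glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    glDisableVertexAttribArrayARB(6);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

Beware that on NV hardware some generic attributes alias the conventional arrays (0 is position, 2 is the normal), which is another reason to pick a higher index for extra data.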

Anyway, I'm also interested in Humus' infinite terrain demo. So, thanks for everyone's help.

[This message has been edited by yoale (edited 05-09-2003).]

You may want to have a look at this doc:

This explains how to compute your tangent-space vectors if you need to, and how to set up the OpenGL rendering for a bump-mapping pass.