Texture Matrix Performance (bumpmapping)

I’m using tangent-space normal maps (bump maps).
The tangent-space light-vectors are passed as 3d texture coordinates so I can use a normalization cubemap.
I could calculate the light-vector in object-space for each vertex.
But even if two polygons share a vertex, the tangent-space light-vector of this vertex may be different, depending on each polygon’s tangent-space matrix.
So I could use a vertex array for the vertex positions and use immediate mode for the tangent-space light-vectors.
But it would be nicer to store the object-space light-vectors in the vertex array as well.
If doing so, I have to change the texture matrix for each polygon to the polygon’s tangent-space matrix, to convert the object-space light-vectors from the vertex array into tangent space.
My question is: is the texture matrix / state change fast enough to change it for each polygon when using more than 10,000 polygons?
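For reference, the per-polygon transform in question is just a 3x3 multiply of the object-space light vector by the polygon’s tangent-space (TBN) matrix. A minimal CPU-side sketch of that multiply (struct and function names are illustrative, not from any actual codebase):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rows: the polygon's tangent, bitangent and normal in object space.
struct TbnMatrix { Vec3 t, b, n; };

static float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Transform an object-space light vector into the polygon's tangent space.
// Loading the TBN matrix into the texture matrix per polygon would make
// the fixed-function pipeline do exactly this multiply.
Vec3 toTangentSpace(const TbnMatrix &m, const Vec3 &lightObj) {
    Vec3 r;
    r.x = dot(m.t, lightObj);
    r.y = dot(m.b, lightObj);
    r.z = dot(m.n, lightObj);
    return r;
}
```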

[This message has been edited by Liquid (edited 08-14-2002).]

If you’re not using vertex programs/shaders to compute the tangent-space light vector, then you’re going to have to compute them on the CPU. You can’t do what you’re thinking (which, btw, means changing the texture matrix per-vertex) with the texture matrix.

The object-space light-vectors are calculated on the CPU and accessed via the vertex array.
I didn’t want to change the texture matrix per vertex.
I know that this isn’t possible.
I want to change it per polygon so that the light-vectors are transformed to tangent-space.
So the object- to tangent-space matrix is equal for all vertices of one polygon but not for two polygons!
The question is how fast is it to change the texture matrix for each polygon you draw?

Umm… let’s say… slow. Actually, it would be super slow if you were to change the matrix for every polygon you draw.


So calculating the object-space light-vectors per vertex, transforming them to the polygon’s tangent space on the CPU, and then using immediate mode to pass them to GL is faster?
I think immediate mode isn’t the fastest.
I could use unique vertices per polygon and store them with their tangent(!)-space light-vectors in a vertex array.
So I wouldn’t have to perform a transform per polygon, but I would lose HW T&L performance because there would be no shared vertices any more.
Isn’t there a better way without using vertex programs?

To use tangent space, you need to do per-vertex math. This is done either on the CPU or in a vertex program/shader.

Note that even if you do the math on the CPU, you’re better off streaming the results into an output buffer and then submitting the entire buffer using Draw{Range}Elements. Using immediate mode causes too much calling overhead unless you’re REALLY low-poly.
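As a sketch of that approach (names are made up; the glDrawRangeElements submission itself is omitted since it needs a GL context): the CPU pass writes one tangent-space light vector per vertex into a stream, and that whole stream is then submitted in a single call.

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// One precomputed tangent-space basis per vertex (rows: T, B, N).
struct TbnMatrix { Vec3 t, b, n; };

static float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// CPU pass: rotate the object-space light vector into every vertex's
// tangent space.  The resulting stream would then be set up with
// glTexCoordPointer and drawn with one glDrawRangeElements call,
// instead of one glTexCoord3f call per vertex.
std::vector<Vec3> buildLightStream(const std::vector<TbnMatrix> &basis,
                                   const Vec3 &lightObj) {
    std::vector<Vec3> out;
    out.reserve(basis.size());
    for (const TbnMatrix &m : basis) {
        Vec3 r = { dot(m.t, lightObj), dot(m.b, lightObj), dot(m.n, lightObj) };
        out.push_back(r);
    }
    return out;
}
```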

Liquid, unless your geometry is completely faceted, you’ll still have shared vertices. And if it were faceted, you wouldn’t have shared vertices anyway.

Yes, you will need to compute the tangent-space vectors on the CPU. Unless you have shaders, OpenGL can’t compute them for you.

But, jwatte is right about putting them in a vertex array when you compute them. It’s much faster that way.

Perhaps look into using object-space bump mapping (very cheap to do, but it has other drawbacks, e.g. you need unique texturing).

I think tangent space is better for world models for several reasons. You are right that object space doesn’t have these problems, but I don’t want to use it.

jwatte, Korval:
I’ve taken into account what both of you said.
Calculating the tangent-space light-vectors on the CPU and storing them in a vertex array should reduce API calls compared to many glTexCoord calls.

If two polygons share one vertex, the object-space light-vector of the shared vertex is equal for both polygons, OK.
BUT, even though it’s the same vertex, the tangent-space light-vectors are different for poly1 and poly2 because the polygons’ tangent-space matrices aren’t equal.
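For concreteness, the per-polygon tangent-space matrix comes straight from the triangle’s positions and UVs; this is the commonly published derivation (a sketch with illustrative names, assuming non-degenerate texture coordinates):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Derive the (unnormalized) tangent and bitangent of one triangle from
// its positions and texture coordinates.  Every triangle gets its own
// basis, which is why a shared vertex can need a different tangent-space
// light vector for each polygon that uses it.
void triangleTangentBasis(const Vec3 p[3], const Vec2 uv[3],
                          Vec3 &tangent, Vec3 &bitangent) {
    Vec3 e1 = { p[1].x - p[0].x, p[1].y - p[0].y, p[1].z - p[0].z };
    Vec3 e2 = { p[2].x - p[0].x, p[2].y - p[0].y, p[2].z - p[0].z };
    float du1 = uv[1].u - uv[0].u, dv1 = uv[1].v - uv[0].v;
    float du2 = uv[2].u - uv[0].u, dv2 = uv[2].v - uv[0].v;
    float r = 1.0f / (du1 * dv2 - du2 * dv1); // assumes non-degenerate UVs
    tangent.x   = (e1.x * dv2 - e2.x * dv1) * r;
    tangent.y   = (e1.y * dv2 - e2.y * dv1) * r;
    tangent.z   = (e1.z * dv2 - e2.z * dv1) * r;
    bitangent.x = (e2.x * du1 - e1.x * du2) * r;
    bitangent.y = (e2.y * du1 - e1.y * du2) * r;
    bitangent.z = (e2.z * du1 - e1.z * du2) * r;
}
```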

Because you can only store one texcoord per vertex, no matter which polygon accesses it (by index), a vertex must have the same tangent-space light-vector the whole time (within one glDrawElements call).

So, out of each shared vertex, I have to make a new vertex per polygon with the correct tangent-space light-vector.
If I do that, I will lose HW T&L performance, because the hardware’s reuse of already-transformed shared vertices is no longer a benefit.
And this is the problem with storing tangent-space light-vectors in a vertex array!

[This message has been edited by Liquid (edited 08-15-2002).]

I can’t use one index for the vertex position and another index for the texcoord (tangent-space light-vector) of the same vertex, can I?
If there is a way to do this, it would help a lot.
But I don’t know any extension that supports this!

Just use the same approach you use for normals: sum them up and normalize. It isn’t accurate at all, but as far as I can tell, it works fine. I think this is covered in the docs on per-pixel lighting at NVIDIA’s developer site.
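Written out, the sum-and-normalize idea looks like this (a sketch; it also exposes the failure mode where opposing tangents cancel to a zero vector):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Average the per-face tangents at a shared vertex the same way smooth
// vertex normals are built: sum them up, then normalize.  This only
// works when the faces' tangents roughly agree; with mirrored UVs two
// opposing tangents can cancel and the sum degenerates to zero.
Vec3 averageTangent(const std::vector<Vec3> &faceTangents) {
    Vec3 s = { 0.0f, 0.0f, 0.0f };
    for (const Vec3 &t : faceTangents) {
        s.x += t.x; s.y += t.y; s.z += t.z;
    }
    float len = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
    if (len > 1e-6f) {
        s.x /= len; s.y /= len; s.z /= len;
    }
    return s; // zero vector signals the degenerate (mirrored) case
}
```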

Don’t trust papers too much.
Things (like the one you mentioned) are only true if several conditions hold.
More on that later.

What you mean is interpolating (smoothing) the tangent-space axis vectors for a vertex?
If so, you are right in one way.
This is a way to smooth edges, and it also makes the tangent-space light-vector valid for all polygons, because you no longer use the polygon’s tangent-space matrix but the vertex’s own, which has the smoothed normal as its tangent-space z-axis.

But consider a case where two polygons share one vertex.
Please ignore that up, down, left and right aren’t proper directions in object space (i.e. in 3D):
Poly1’s TS(tangent-space) x-axis is right and the TS y-axis is up.
Poly2’s TS x-axis is left and the TS y-axis is down.
So how do you smooth this?
The resulting matrix would be all zeros!
And even where it isn’t all zeros, the resulting matrix can still be wrong.

Another point: the normals in the bumpmap are stored in the polygon’s tangent space, not in the smoothed vertex’s one. But maybe that very difference is what produces the smoothing effect!?

harsman, you are right that there are cases where a vertex doesn’t have to be split in two and its TS light-vector is valid for more than one polygon.
But this is only the case if all polygons that share this vertex are smoothed with each other and their tangent-space matrices (xyz axes) can be smoothed too (i.e. no opposite directions as in my example).

[This message has been edited by Liquid (edited 08-15-2002).]

Yes, that’s correct. Models (specifically the models’ tex coords) need to be well behaved for this to work. Just as you said, artists can’t mirror a texture over two polygons (e.g. only half of a character’s face is stored in the texture map and then mirrored using UVs). Tough, but that’s the only way you’ll get acceptable performance out of tangent-space bumpmapping. If you have lots of broken meshes you want to use, check out NVMeshMender at NVIDIA’s developer site.

Thanks. You’ve helped me confirm my thoughts.
I think I’ll try to calculate the TS light-vector for each vertex and store it in a vertex array.
I will split vertices with the same position but a different normal/tangent matrix into two vertices, in a pre-processing step or in the level editor.
Because most of my edges are smooth, I think this will be ok.

But one thing would be nice:
To have the possibility to use different indices for the position and the texcoords of one vertex.
Because two vertices with the same position but different texcoords must be transformed twice by GL :(

On most moderately high-poly models, having separate indices probably isn’t a win. You might save some transform cost (if the hardware can handle separate indices), but you have to transfer a lot more indices. The cost of transferring these extra indices is probably higher than the cost of duplicating a few vertices.

Maybe you are right.


Another thing that you can do is to store normal maps not in tangent space, but in object space. Pros and cons of this approach:

  • you don’t need vertex shaders if you use cubemaps (usual texture matrix transform is sufficient).
  • less space is required for VB (you don’t need to store TBN basis).
  • you have to create a normal map for each object (even if objects use the same normal perturbation map, their geometry will be different and thus the normals will be different). It can get messy, since you can create additional texture-coordinate discontinuities in the model (thus you have to duplicate vertices, etc.). But if your model uses a “skin-like” texture and a 1:1 mapping from texture space to surface space exists, you can reuse the texture coordinates of your base texture.
  • it is complicated to use spot lights in object-space lighting scheme like this.
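To illustrate the first bullet: with object-space normal maps the per-vertex basis disappears entirely, and the light direction is brought into object space once per object. A sketch, assuming the model matrix is a pure rotation (with non-uniform scaling you’d need a proper inverse, which relates to the spot-light caveat):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Upper-left 3x3 of the model matrix, row-major (rotation only assumed).
struct Mat3 { float m[3][3]; };

// Bring a world-space light vector into object space.  For a pure
// rotation the inverse is the transpose, so we multiply by the
// transpose; non-uniform scaling would require a true inverse.
// This happens once per object, not once per vertex or per polygon.
Vec3 lightToObjectSpace(const Mat3 &modelRot, const Vec3 &lw) {
    Vec3 r;
    r.x = modelRot.m[0][0] * lw.x + modelRot.m[1][0] * lw.y + modelRot.m[2][0] * lw.z;
    r.y = modelRot.m[0][1] * lw.x + modelRot.m[1][1] * lw.y + modelRot.m[2][1] * lw.z;
    r.z = modelRot.m[0][2] * lw.x + modelRot.m[1][2] * lw.y + modelRot.m[2][2] * lw.z;
    return r;
}
```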

Hope it helps. Sorry for my English.

[This message has been edited by h2 (edited 08-16-2002).]

Thanks, h2.
But I’ve also read all the papers on the nvidia developer page.
I still think tangent-space will be better for a whole level, even if object-space has some advantages.
I think spotlights are as easy as in the tangent-space method.
Just use a cubemap and multiply everything with it.
It doesn’t matter if the bumpmap is in tangent- or object-space.

Of course, spot lights are not easy in tangent space either. I mean the case where you have non-uniform scaling.

[This message has been edited by h2 (edited 08-19-2002).]