Texture Matrix vs. Modelview Matrix

Hi, I’m currently implementing volumetric projection using 3D textures. The method is very similar to how dorbie does it. If you’d like to know how it’s done (although I don’t think you need to know the method to answer my question), just search for my posts. Dorbie obviously helped me a lot.

What I need is, given a modelview matrix (or a series of scalings, rotations, and translations applied to an identity matrix), the eye-space-equivalent texture matrix.

Does that make sense? What is the relationship between the texture matrix and the modelview matrix as far as eye-space is concerned? I couldn’t really find any info; can you direct me to some sources that explain such?

It’d be preferable if I could just glGet the current modelview matrix, process it, and then load that into the texture matrix. One pipeline stall per frame isn’t gonna hurt my code, or will it?

Originally posted by PhilY:
What I need is given a modelview matrix (or a series of scaling, a rotation, and translations of a identity matrix), an eye-space equivalent texture matrix.

If you want to emulate the EYE_LINEAR function yourself using the texture matrix (although I don’t see the point of doing this; you might as well send the computed tex coords yourself):

you need the plane equation for your texGen function: (A, B, C, D)

then you need the inverse modelview: M’

Compute eye space plane equation :

(A’, B’, C’, D’) = (A, B, C, D) M’

Then compute the eye space vertices :

(x’, y’, z’, w’) = M (x, y, z, w)

Send those transformed vertices to GL as texture coords. I guess you need glTexCoord3f.

Now for the final step, setting up the texture matrix :

[As’ Bs’ Cs’ Ds’]
[At’ Bt’ Ct’ Dt’]
[Ar’ Br’ Cr’ Dr’]
[0   0   0   1  ]

with one eye-space plane equation per row: the S plane in the first row, the T plane in the second, the R plane in the third.

and you are set!
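The plane-transform step above can be sketched in plain C. This is just an illustration of the row-vector-times-matrix operation (A’, B’, C’, D’) = (A, B, C, D) M’; the helper name is mine, and I assume the OpenGL column-major matrix layout:

```c
/* Sketch: a plane treated as a row vector multiplied by a 4x4 matrix
   stored column-major (OpenGL convention), i.e.
   (A', B', C', D') = (A, B, C, D) * M.
   Element (row, col) of m lives at m[col * 4 + row]. */
static void plane_times_matrix(const float p[4], const float m[16], float out[4])
{
    for (int col = 0; col < 4; ++col)
        out[col] = p[0] * m[col * 4 + 0] + p[1] * m[col * 4 + 1]
                 + p[2] * m[col * 4 + 2] + p[3] * m[col * 4 + 3];
}
```

Feed it your plane and the inverse modelview to get the eye-space plane for the texture matrix row.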

[This message has been edited by V-man (edited 02-07-2003).]


What exactly is the effect you want to achieve? Moving and rotating the volume just as if it were a polygon object? For that you can use GL_OBJECT_LINEAR texgen; the texture coordinates are then generated in object space.


Thanks to both of you for replying.

Ilkka, that’s exactly what I’m looking to do. I need the texture coordinates to behave as if the volume were a polygon object, so that when I apply clipping planes that have been rotated, scaled, and translated via the modelview matrix, I will see the correct texture rendering.

I’ve also posted this: http://www.opengl.org/discussion_boards/ubb/Forum2/HTML/011756.html It’s pretty much me b!tching and moaning… That may shed a little more light on what I’m trying to do.

Thanks again.


In that case the object linear texgen should really do it. It’s just a matter of replacing

glTexGenfv(GL_S, GL_EYE_PLANE, plane);

with

glTexGenfv(GL_S, GL_OBJECT_PLANE, plane);

Then apply all your transformations to the modelview matrix, and draw your geometry as usual. Now the texture coordinates depend on the values you give with glVertex commands, regardless of the transformations in the modelview matrix. No need for the texture matrix. In general, you don’t need the texture matrix when using texgen; you can always pre-transform the eye/object planes.



Here is exactly what I’m trying to do:

  1. Apply rotation,scaling, and translation to the MODELVIEW matrix and define clipping planes.

  2. Apply the equivalent rotations/translations/scalings to the texture matrix.

  3. Then, load identity matrix to the MODELVIEW matrix and render a series of GL_QUADS that are orthogonal to the current view, while keeping the 3D texture and clipping planes rotated, translated and scaled.

The result is a bunch of GL_QUADS textured with the rotated 3D texture, plus clipping planes that bound the 3D texture. As far as the user is concerned, the 3D volumetric stuff is rotating; whereas in reality, the texture and clipping planes are rotating while the actual geometry stays stationary.
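Roughly, per frame, the sequence I have in mind looks like this (a sketch with placeholder names, not my actual code; it assumes a current GL context, an enabled 3D texture, and texgen already set up):

```c
/* Step 1: transforms on the modelview, then define the clip planes
   while those transforms are active (clip planes are stored in eye
   coordinates at glClipPlane time). */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, 2.5f);
glMultMatrixf(rotation);              /* placeholder rotation matrix */
glClipPlane(GL_CLIP_PLANE0, clip0);   /* placeholder plane coeffs */

/* Step 2: the equivalent transforms on the texture matrix. */
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glMultMatrixf(textureRotation);       /* placeholder */

/* Step 3: identity modelview, then view-aligned slice quads. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_QUADS);
/* ... emit slices orthogonal to the view, back to front ... */
glEnd();
```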

Am I making any sense? Further suggestions?


that’s a crystal clear explanation.

But what’s the problem? Everything was well explained by Justhanging.


The texture coordinates are generated according to the vertices in the GL_OBJECT_LINEAR mode, correct? So that’d mean in order for me to generate the correct texture coordinates (so that the 3D texture appears to be rotating), I’d have to manually create rotated vertices and render them with the opposite rotation so that the QUADS will always be facing the viewer. Ilkka, am I understanding that correctly?

Or do you mean the texture coordinates (in GL_OBJECT_LINEAR mode) are generated according to the modelview matrix? If that’s the case, then I cannot render the QUADS always facing the viewer, because then the texture will be stationary. I can render the QUADS with the rotated modelview matrix; however, if the rotation becomes 90 degrees, the viewer will only see vertical/horizontal lines.

Thank you Ilkka and V-man for your continuing help.

I don’t know if this might help but…


No, you don’t have to rotate the quads. The texture coordinates are generated from the vertices in object space and from the plane equations you supply via glTexGen. The texture matrix comes into play as well.

float planeEquationS[4] = {a, b, c, d};
float planeEquationT[4] = {a, b, c, d};
float planeEquationR[4] = {a, b, c, d};
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGenfv(GL_S, GL_OBJECT_PLANE, planeEquationS);
glTexGenfv(GL_T, GL_OBJECT_PLANE, planeEquationT);
glTexGenfv(GL_R, GL_OBJECT_PLANE, planeEquationR);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R); // make sure you are doing this

You can supply your own plane equation to rotate the texture around or you use the texture matrix. Use both if you dare. Plus the plane equation for each coordinate can be different.

PS: you have to specify the plane equation for R I think, otherwise trouble!
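For supplying your own rotated planes: if you use the rows of a transform matrix M as the S/T/R plane coefficients, texgen produces (s, t, r) = the first three rows of M applied to the object-space vertex. A sketch (helper name is mine; column-major storage as OpenGL uses):

```c
/* Sketch: derive the three object planes from a 4x4 transform M
   (column-major, element (row, col) at m[col * 4 + row]).
   With these planes, texgen yields (s, t, r) = (row0.v, row1.v, row2.v),
   i.e. the first three rows of M applied to the vertex v. */
static void planes_from_matrix(const float m[16],
                               float sPlane[4], float tPlane[4], float rPlane[4])
{
    for (int c = 0; c < 4; ++c) {
        sPlane[c] = m[c * 4 + 0]; /* row 0 of M */
        tPlane[c] = m[c * 4 + 1]; /* row 1 of M */
        rPlane[c] = m[c * 4 + 2]; /* row 2 of M */
    }
}
```

Then pass the three arrays to glTexGenfv with GL_OBJECT_PLANE as above.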


I did think about rotating the plane equation… however, I still need to scale and translate the texture. I don’t think I can do that with plane equations alone.

Here is why I wanted to know the eye-space-equivalent texture matrix (I guess object space would work too): the project I’m working on already has a software renderer and a 2D texture renderer. The display is set up depending on many different user settings (zoom, perspective, etc.) and the nature of the volumetric data (distance between the voxels, etc.).

I tried to match all of the glTranslate, glScale, and glMultMatrix calls being applied to the modelview matrix, but could not figure out exactly how to match them up so that they would look identical. Additionally, the texture matrix also needed one more glScale call, because the data dimensions aren’t always powers of 2. So I figured I’d just find an easy way to derive the modelview-equivalent texture matrix at every rendering call, apply that to the texture matrix, and then scale it according to the texture’s dimensions.
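Schematically, what I’m after is something like this (placeholder scale values; the correct contents of the texture matrix are exactly the part I can’t fill in):

```c
/* Sketch of the idea: read back the current modelview once per frame,
   then build the texture matrix from it plus the extra scale for the
   non-power-of-2 texture dimensions. All names are placeholders. */
float mv[16];
glGetFloatv(GL_MODELVIEW_MATRIX, mv);  /* the one readback per frame */

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(sx, sy, sz);                  /* texture-dimension scale */
glMultMatrixf(mv);                     /* "modelview-equivalent" part */
glMatrixMode(GL_MODELVIEW);
```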

So here is a typical scenario:

texture size: 120, 120, 40 (tsx, tsy, tsz)

glTranslatef(0, 0, 2.5);
glMultMatrixf(m); // m is a rotation-only matrix
glScalef(.9, .8, .7);
glScalef(1, 1, 1.4);
glTranslatef(-.5, -.5, -.5);

or generally:
glTranslatef(t1x, t1y, t1z);
glMultMatrixf(m); // m is a rotation-only matrix
glScalef(s1x, s1y, s1z);
glScalef(s2x, s2y, s2z);
glTranslatef(t2x, t2y, t2z);

so what should I do after glMatrixMode(GL_TEXTURE) call?

Thanks again. Thanks, Ysaneya, for the link as well.

Any more inputs?

You could use object-space texgen and specify the planes in object space. You can transform a plane into object space by multiplying it by the inverse modelview matrix.

It’s also possible that the inverse modelview matrix is what you want to load into the texture matrix if you want to do things the way you propose, but I’m not sure about that. I’d just try it out and see, and if it doesn’t work, use object space.

Inverting an affine matrix: http://www.cs.unc.edu/~gotz/code/affinverse.html
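For the rotation+translation case that page covers, the inverse has the closed form [Rᵀ | −Rᵀt]. A sketch in C (column-major storage as OpenGL uses; the upper 3×3 is assumed to be a pure rotation — if you have scaling too, you need to invert the full upper 3×3 instead of just transposing it):

```c
/* Sketch: invert a column-major 4x4 affine matrix whose upper 3x3 is
   orthonormal (rotation only). Element (row, col) lives at m[col*4+row].
   The inverse of [R | t] is [R^T | -R^T t]. */
static void affine_rt_inverse(const float m[16], float out[16])
{
    /* Transpose the rotation part. */
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            out[c * 4 + r] = m[r * 4 + c];

    /* Translation: -R^T * t, where t is column 3 of m. */
    for (int r = 0; r < 3; ++r)
        out[12 + r] = -(out[0 * 4 + r] * m[12]
                      + out[1 * 4 + r] * m[13]
                      + out[2 * 4 + r] * m[14]);

    /* Bottom row stays (0, 0, 0, 1). */
    out[3] = out[7] = out[11] = 0.0f;
    out[15] = 1.0f;
}
```

That would give you the inverse modelview to load into the texture matrix (or to transform the planes with), without a general 4×4 inversion.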