I’ve spent two days scouring the internet trying to figure out how to transform a vertex position into world space in GLSL, so I can compute a cube map texcoord/vector. The example in the Orange Book doesn’t seem to handle camera orientation. I tried saving the modelview matrix after setting up the camera, transposing it, passing it to GLSL in the texture matrix, multiplying the vertex position first by the modelview matrix and then by that texture matrix, and finally subtracting the camera position (which I pass in as a uniform vec3). But I can’t easily get data back out of GLSL, so I’m working blind until I stumble on the combination that just works. Please help!

Instead of uploading the inverse of the view matrix, you can simply upload the model matrix and use that on the original vertex (gl_Vertex).

Something like:

```
// Set up the view matrix.
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
gluLookAt( .. );
// Combine the model matrix with the view matrix.
glRotate( foo1 );
glTranslate( foo2 );
// Construct a separate model matrix for the shader.
glMatrixMode( GL_TEXTURE );
glLoadIdentity();
glRotate( foo1 );     // same as above
glTranslate( foo2 );  // same as above
glMatrixMode( GL_MODELVIEW );
```

or

```
// Set up the model matrix for the shader.
glMatrixMode( GL_TEXTURE );
glLoadIdentity();
glRotate( foo1 );     // same as above
glTranslate( foo2 );  // same as above
glGetFloatv( GL_TEXTURE_MATRIX, model_matrix );
// Set up the full modelview matrix.
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
gluLookAt( .. );
glMultMatrixf( model_matrix );
```

Is there any way to pass a matrix to the shader as a variable?

Do I multiply the vertex position by gl_ModelViewMatrix or gl_ModelViewMatrixInverse? Do I multiply the result by the matrix I build from just the rotation, position, and scale, or by the transpose of that matrix? And if I use the transpose, do I leave the translation values where they are and negate them, or do I move them into the right-hand column?

There are a lot of places where this could go wrong, and I am programming blindly with GLSL. Do you have some working code?

Sonofabitch!

I found out why it isn’t working. OpenGL ignores the position and scale when multiplying the matrix. It works fine on objects positioned at (0,0,0), but the transformation doesn’t account for position, so if an object is anywhere but the origin, it doesn’t appear right.

If I can extract the position from the gl_ModelView or gl_ModelViewInverse matrix, I can add it to the transformed position, either before or after multiplying by the matrix (I’ll have to experiment).

So my question now is, how do I extract a position from a matrix in GLSL? Is it just x = gl_ModelViewMatrix[3][0], y = gl_ModelViewMatrix[3][1], z = gl_ModelViewMatrix[3][2], or what?

I got it!

gl_ModelViewMatrix and gl_ModelViewMatrixInverse only contain the rotation. They are not the true matrices, and do not contain the scale or the position!

You have to pass two matrices into the shader. I chose to use unused texture matrices. After applying your camera transformations, save the matrix. You will reuse this throughout your rendering routine. Set this matrix on GL_TEXTURE3.

After all object transformations are applied, just before you draw a surface, get the ModelView matrix and load it into the texture matrix for GL_TEXTURE2.

Now use this code to transform the vertex and calculate the vertex position in world space:

```
// Cube map
vec4 pos;
// Extract the scale from the matrix
vec3 scalex = vec3( gl_TextureMatrix[2][0][0], gl_TextureMatrix[2][0][1], gl_TextureMatrix[2][0][2] );
vec3 scaley = vec3( gl_TextureMatrix[2][1][0], gl_TextureMatrix[2][1][1], gl_TextureMatrix[2][1][2] );
vec3 scalez = vec3( gl_TextureMatrix[2][2][0], gl_TextureMatrix[2][2][1], gl_TextureMatrix[2][2][2] );
vec3 scale = vec3( length( scalex ), length( scaley ), length( scalez ) );
// Transform the vertex to eye space
pos = gl_Vertex;
pos.x = pos.x * scale.x * scale.x;
pos.y = pos.y * scale.y * scale.y;
pos.z = pos.z * scale.z * scale.z;
pos = pos * gl_ModelViewMatrixInverse;
pos.x = pos.x + gl_TextureMatrix[2][3][0];
pos.y = pos.y + gl_TextureMatrix[2][3][1];
pos.z = pos.z + gl_TextureMatrix[2][3][2];
// Transform eye space to world space
pos.x = pos.x - gl_TextureMatrix[3][3][0];
pos.y = pos.y - gl_TextureMatrix[3][3][1];
pos.z = pos.z - gl_TextureMatrix[3][3][2];
pos = pos * gl_TextureMatrix[3];
ReflectDir = pos.xyz - CameraPosition;
```

And finally, since this thread will probably be read by a lot of users, visit www.leadwerks.com for all your 3D needs!

Originally posted by halo:

> I found out why it isn’t working. OpenGL ignores the position and scale when multiplying the matrix.

Matrix-times-vector multiplication applies both translation and scale. It is simply four dot products, so it does not care what transformation the matrix represents. If the .w coordinate of the vector used in the multiplication is zero, the translation contribution will also be zero.

> gl_ModelViewMatrix and gl_ModelViewMatrixInverse only contain the rotation. They are not the true matrices, and do not contain the scale or the position!

They are full matrices, something different was wrong in your application.

Whatever dude, no one else could produce working code. I’ve been using GLSL for four days, and I had to figure this out myself because there isn’t a single example on the internet that works.

I think the correct order for multiplying a vector and a matrix is

pos = gl_ModelViewMatrixInverse * pos;

I haven’t seen this explained anywhere, but I think doing

pos=pos*gl_ModelViewMatrixInverse;

will multiply the vector by the transpose of the matrix. This might be part of your problem since it would effectively be the inverse of the rotation matrix with no translation.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.