How do I do this in OpenGL? If the camera position is x,y,z, what do I do to the texture matrix to make my cube map move? (I am using GL_REFLECTION_MAP).

-Most explanations I read say to rotate the texture matrix by the camera rotation. This makes no sense. The reflection should only change with position, not camera angle.

-I am not using a modelview matrix, really, because I am rendering static brush geometry, i.e. the BSP walls and floor of a map.

-Play the “Chrome” map in UT2004 and look at the walls. THAT’s what I want. Or just download the demo above.

Assume you have two matrices: one is the transformation matrix for a given object that is to be rendered, and the other is the modelview matrix as it exists when you will be doing your rendering (i.e. the modelview matrix PRIOR to multiplying the object transformation onto the stack).

If you think of the modelview matrix as indicating where your “camera” is, you can explode it like this:

[ VX.x  VY.x  VZ.x  -CP.x ]
[ VX.y  VY.y  VZ.y  -CP.y ]
[ VX.z  VY.z  VZ.z  -CP.z ]
[ m3    m7    m11    m15  ]

Where VX, VY, VZ and CP are vectors that describe the orientation of the <u>view</u> X, Y and Z axes, and the camera position in world space, respectively.

If you drop CP (or, more accurately, set it to zero), then you have the basis for a transformed coordinate system at any point (in world coordinates) along the line from <cameratarget.x, cameratarget.y, cameratarget.z> to <camera.x, camera.y, camera.z>.

Transform vertex normals to world space by multiplying them by the upper 3x3 of the object transformation matrix, and then multiply by the 3x3 extracted from the existing modelview matrix. Now you have a vector in world space that describes the normal as it is oriented for this object orientation and this camera position.

If you assume that your cube map is aligned with the world axes, i.e. the identity basis [I, J, K], you’re done; use that normal to get a texture coordinate as you always would, either through your own functionality or a texture coordinate generation function. Otherwise, it’s simply a matter of multiplying it one more time to put it in “cube map space,” so to speak; for example, if your cube map is for some reason rotated 45° about the world Y axis, perform that transformation on the new normal.

Note that I like to keep everything in world space, although lots and lots of people like to do normals and lights and all that stuff in view space.

The geometry I am testing is all global space, no rotation or anything. So I am starting with normals in world space.

…use that normal to get a texture coordinate as you always would, either through your own functionality or a texture coordinate generation function.
I am unclear on how to convert that data into useful cubemap texcoords, or how the camera position factors into it. How does a global normal [nx, ny, nz] and a camera position [x, y, z] make a cube map texture coordinate [s, t, r]? Isn’t the point of enabling texgen on S, T, and R that I don’t have to update the UVs myself? It was my understanding that I just had to use the GL_REFLECTION_MAP texgen mode and tweak the texture matrix somehow.

Yes, the point of tex coord generation IS so you don’t have to use your own code to find the coordinates, but you didn’t specify whether you were using plain straight OpenGL, the basic cube map extension, or vertex and fragment programs, so I tried to cover all the bases.

Luminous, consider that the modelview matrix, in this case, is just a plain identity matrix. No rotations or translations have been used. So as expected, implementing your code did not result in cubemaps that move when the camera position changes.

Just to make sure, I logged the model view matrix each time this function ran. It was always 0.0 for each parameter. So I am taking that to mean this has nothing to do with the camera position, as I thought. I do not see how getting the modelview matrix can have anything to do with camera position.

The only way I am getting cube maps that are remotely similar to the demo I posted is by translating the texture matrix the opposite of the camera position (multiplying each dimension by 1000, no idea why), and setting the normal for each vertex equal to the camera position (normalized).

In pseudo-code:

glTranslatef(-camerax*1000, -cameray*1000, -cameraz*1000);
glNormal3f(camerax, cameray, cameraz); // except normalized, of course

yes I saw the effect in UT2k4 (DM-Icetomb/DM-Rustatorium/DM-1on1-Idoma/…)
and that’s exactly what I HAVE DONE.

I don’t know how you set up your camera, but the code above makes the cube map
“world aligned” by setting the rotation values of the texture matrix to
the inverse of those in the modelview matrix.

So what I do is something like this:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

Set New Camera Pos/Rot

Set ENV COMBINE
Get the inverse of the modelview and multiply it into the Texture Matrix
to get a Cube Map Texture Matrix (see above)
Render Static Geometry

Texture Coordinate Generation Modes for Cube Map Textures
…
…
…
OpenGL’s texture matrix is also very useful for manipulating cube map texture coordinates. The texture matrix can be used to rotate an (s,t,r) vector from one space to another. For example, if your cube map texture is oriented in world coordinate space, and M is the matrix transform that moves from world coordinates to eye coordinates, you can load the inverse of the affine portion of M into the texture matrix to rotate the eye-space reflection or normal vectors generated by GL_REFLECTION_MAP_EXT or GL_NORMAL_MAP_EXT back into world space.

Set New Camera Pos/Rot
Shouldn’t you be setting the camera orientation with the projection matrix (GL_PROJECTION), not GL_MODELVIEW? Which is our point… the modelview matrix has nothing to do with the camera position.

I don’t understand why comments suddenly stopped when I pointed out that the modelview matrix is not used in camera positioning, but rather the projection matrix. Can someone address this?

With my own code, the reflection maps on spheres worked until I moved the camera location (although looking around was fine). As soon as I plugged in your texture matrix code, it all worked perfectly!