About the Fake Environment Mapping in Q3?

As far as I know, it uses simple
texture mapping, but changes the texture
coordinates as the eye position moves.
Am I right? If so, how does Q3 calculate
the texture coordinates at runtime? (It does
not use the glGen …)

er… Did I ask too much? :frowning:

I figure it probably uses one of the various environment mapping approximations based on the relative position of the eye. For example, subtract the environment-mapped vertex vector from the viewer location vector, then use s=kx/z and t=ky/z as the texture coordinates for the vertex, where k is some scale factor. I’ve also seen people skip the division. Then again, it might calculate real sphere mapping coordinates (it’s not hard at all). I suspect they decided to do all their own texture coordinate generation so they could be certain the CPU hit was minimized and independent of any vendor’s OpenGL implementation.

[This message has been edited by DFrey (edited 01-06-2001).]

Why not just use the texture matrix? It seems most people never touch it, but it’s very easy to use: you can apply all the transformations (rotations, scales, etc.) to it to make a texture appear to stay still while everything else moves, thus looking like a reflection.
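For reference, here is what that looks like in fixed-function OpenGL. This is a hedged fragment rather than a complete program, and `eye_x`/`eye_y` and the `0.01f` scale are illustrative values:

```c
/* Counter-translate the texture coordinates by the eye position so the
 * texture appears fixed in the world while the geometry moves. */
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(-eye_x * 0.01f, -eye_y * 0.01f, 0.0f);
glMatrixMode(GL_MODELVIEW);   /* restore the usual matrix mode */

/* ... draw the "reflective" surface with its ordinary texcoords ... */
```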

You have to be extra careful when messing around with the texture matrix. Its stack depth is generally very shallow, and varies from one OpenGL implementation to another.
From the snooping I’ve done, Quake 3 does not appear to transform the texture matrix (or has not in the past; I haven’t checked 1.27g). This appears to have been a QA decision. It is known that some vendors’ OpenGL pipelines stall when texture matrix transformations are introduced. Plus, the more code you defer to hardware vendors, the more complex product support becomes. So for texture transformations, something which is in itself easy to do, there is a very good reason to do it yourself.

[This message has been edited by DFrey (edited 01-07-2001).]

You are correct in saying that Q3 doesn’t use the texture matrix (yes, I’ve looked at the Quake 3 source code). But I don’t think the reason is so simple. After all, Q3 leaves the transform to the driver, so why not leave the texture matrix and texgen to it as well?

On HW T&L cards, after all, if you can do texgen and the texture matrix in HW, you can often not only reduce the amount of data sent to the HW per vertex (texcoords are computed from the vertex rather than sent explicitly, for example), but also offload work from the CPU.

I think the real reason Q3 doesn’t use the texture matrix (and it could – and probably get a speedup) is that Carmack didn’t really design it in when thinking about the shaders. The same goes for HW lighting – Q3 does lighting computations in SW, and they look a lot like the kinds of lighting computations that OpenGL does, but not exactly. Carmack could have designed the engine to take advantage of a lot of these features, but he didn’t, because the HW was simply not available at the time.

On GF cards, I’d strongly recommend letting us do texgen and the texture matrix rather than doing it yourself. Not only does it save you writing some code, but we support all OpenGL texgen modes in HW.
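Letting the driver do the work is only a few calls in fixed-function OpenGL. A fragment (not a complete program) using the standard sphere-map texgen mode:

```c
/* Generate s and t from the eye-space reflection vector in HW/driver. */
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);

/* ... draw the reflective geometry; no explicit texcoords needed ... */

glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
```

Note that sphere-map texgen needs correct per-vertex normals, since the coordinates are derived from the eye-space normal and view direction.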

  • Matt