[QUOTE=wenhaug;1291421]Questions:

- The rendering volume is limited: x is [-1,1], y is [-1,1], z is [0,1].

[/QUOTE]

If you’re talking about NDC (normalized device coordinates), then no. The rendering volume spans -1…1 in x, y, and z (by default, though the z range is configurable). In your quote above, you have z going from only 0…1.

[QUOTE]How does the cube fit into this volume? 2x2x2 is more than the rendering volume, but in the window it is rendered smaller and fits good.[/QUOTE]

[noparse]vmath::perspective()[/noparse] creates a perspective projection, like the old gluPerspective. It basically slices out a rectangular cone of space (a “frustum”) from the eyepoint, stretching that region as necessary to reshape it to fit within the rectangular framebuffer.

Take a look at the transform on that gluPerspective ref page. As you can see, there are two kinds of XY scaling going on here:

[ol]
[li]The first is explicit scale factors on X and Y, both functions of f = cot(fovy/2), in the (0,0) and (1,1) cells. These rescale X and Y so that exactly fovy degrees fit vertically within the clipping volume, with a proportionally wider extent (aspect times as wide) fitting horizontally.[/li][li]The second kind of scaling is a bit more subtle. Look at the 3rd column in the 4th row. See that -1? That effectively makes CLIP-SPACE.W == -EYE-SPACE.Z. Then when the perspective divide comes along, CLIP-SPACE X, Y, and Z are divided by that W value, which scales them by -1/EYE-SPACE.Z. This gives you the perspective foreshortening where XY distances on the screen become smaller and smaller the further the points are into the scene (i.e. the larger their EYE-SPACE depth).[/li][/ol]

For the more general perspective projection transform which allows off-axis (asymmetric) perspective projections, see OpenGL Projection Matrix (Ahn).

[QUOTE]- mv_matrix in my example does only the positioning of the cube in the world coordinates, doesn’t it? I.e. from the local coordinates to the world coordinates?[/QUOTE]

No. It positions the cube in “EYE-SPACE” coordinates (though it’s not a cube in this space…that comes later, in NDC-SPACE). See the diagram at the top of this page: OpenGL Transformation (Ahn).

[QUOTE]- proj_matrix: NEAR_PERSPECTIVE is 0.1f, FAR_PERSPECTIVE is 1000.0f. This values limit the “visibility zone”.

But, again, how 1000.0f is connected with the rendering volume of x is [-1,1], y is [-1,1], z is [0,1]?[/QUOTE]

Recall that the PROJECTION transform transforms points/vectors from EYE-SPACE -to- CLIP-SPACE. And to transform CLIP-SPACE -to- NDC-SPACE, you do the perspective divide (i.e. divide CLIP-SPACE.XYZ by CLIP-SPACE.W).

NEAR_PERSPECTIVE and FAR_PERSPECTIVE are the absolute values of the Z clip planes as defined in EYE-SPACE. In other words, Z = -NEAR_PERSPECTIVE is the near clip plane in EYE-SPACE (and similarly for the far plane). They’re used as terms in the projection transform.

Look back at that perspective projection transform on the gluPerspective page. In that matrix, they’re called “zNear” and “zFar”, respectively (on the OpenGL Projection Matrix page, they’re called “n” and “f”). As you can see, these terms are used to compute the CLIP-SPACE Z values. After the perspective divide (by CLIP-SPACE.W), it’s easy to see they’re a factor in NDC.Z. NDC.Z values must be in [-1…1] to lie within the view frustum. So, in summary: the NEAR_PERSPECTIVE and FAR_PERSPECTIVE (eye-space depth) values you pass into [noparse]vmath::perspective()[/noparse] to build the perspective projection transform end up scaling and shifting eye-space depth (Z) values so that the slab of the scene you want to keep in the depth dimension is mapped to NDC Z values within [-1…1].

[QUOTE]- proj_matrix: how do we know the position of the camera and its direction?[/QUOTE]

MODELVIEW transforms vertices to EYE-SPACE. In EYE-SPACE, the eyepoint is always at the origin (0,0,0) looking down the -Z axis with +X right and +Y up.

So it’s implicit in the MODELVIEW you provide.

It’s also implicit in the PROJECTION transform you provide, since PROJECTION transforms vertices “from” EYE-SPACE.

In truth, the GPU doesn’t need to know or care what your EYE-SPACE is. In fact, you can multiply your MODELVIEW and PROJECTION transforms together on the CPU and just provide one composite transform (MVP) to the shaders to transform positions directly from OBJECT-SPACE to CLIP-SPACE. The only thing the GPU really “needs” to know for your vertices is CLIP-SPACE positions. That’s what the vertex shader computes.

It’s more likely that you, the developer, actually care about EYE-SPACE and where the eyepoint is, because (for instance) you might end up doing lighting calculations in the shader in that space, which is typical.

[QUOTE]how do we know if the camera’s top is pointing up? How to change it?[/QUOTE]

Pointing up in WORLD-SPACE you mean? This is defined by your VIEWING transform, which you’ve combined into MODELVIEW. The VIEWING transform transforms points/vectors from WORLD-SPACE to EYE-SPACE.

Like EYE-SPACE, the GPU doesn’t need to know or care about your WORLD-SPACE.

[QUOTE]how do we know where the camera is pointing to? How to change it?[/QUOTE]

Again this is defined by your VIEWING transform.