I’m starting with some space-partitioning algorithms, and I’m facing a simple geometric problem - determining whether a point is or is not inside the truncated pyramid of view (the view frustum).

What I have are the standard model-view matrices - so the camera position and the directions of the axis vectors. I also have the aspect ratio and the positions of the clipping planes.

So my idea was to create a vector from the camera position (‘C’) to the point (‘P’), and then
check whether the angle between CP-> and the vector perpendicular to the screen is small enough.

So much for the theory, but in practice I do not know how to implement it fast. Any tutorials, or better ideas for a solution? Maybe some code?

Check your point’s distance to all six clip planes. Let’s say the normal vector of each plane faces outward from the frustum. Now you have to check whether the distance from every plane i to the point is smaller than zero. (Edit: If so, it’s in the frustum.)
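A minimal sketch of that six-plane test in C (the Plane struct and all names are mine, not from any particular library; the planes are assumed to have outward-facing normals, as described above):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical plane representation: outward-facing normal (nx, ny, nz)
   and offset d, so the signed distance of point p is n.p + d. */
typedef struct { double nx, ny, nz, d; } Plane;

/* Signed distance from point (px, py, pz) to the plane; positive means
   the point lies on the side the normal points to (outside). */
static double plane_distance(const Plane *pl, double px, double py, double pz) {
    return pl->nx * px + pl->ny * py + pl->nz * pz + pl->d;
}

/* The point is inside the frustum iff it is on the inner side of all six
   planes, i.e. every signed distance is negative (normals face outward). */
bool point_in_frustum(const Plane planes[6], double px, double py, double pz) {
    for (int i = 0; i < 6; ++i)
        if (plane_distance(&planes[i], px, py, pz) >= 0.0)
            return false;
    return true;
}
```

Note that the early return makes this cheap on average: a point outside the frustum usually fails one of the first planes tested.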

Of course I can use glFrustum, but as I’m new to OpenGL, the idea of creating a prism from a truncated pyramid sounds odd. What are these vectors? Are they normals to the walls of a prism, or of a pyramid?

And how does it work when using gluPerspective()?
gluPerspective is AFAIK equivalent to:

GLdouble top    = tan(fovy * PI / 360.0) * zNear; /* fovy * PI / 360 is half of fovy, in radians */
GLdouble bottom = -top;
GLdouble right  = top * aspect;
GLdouble left   = -right;
glFrustum(left, right, bottom, top, zNear, zFar);

So, you can still construct vectors. These vectors are normals of clip planes:
vec0 - left
vec1 - right
vec2 - top
vec3 - bottom
These normals face inside the frustum.
vec4 is a vector facing in the same direction the camera is facing - dot products with this vector will give the z-coordinate that you should compare to zNear and zFar.
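A small sketch of building these five vectors directly from the glFrustum parameters, in C (the Vec3 struct and function name are mine; vec0, vec1 and vec4 follow the formulas given later in this thread, while vec2 and vec3 are derived by analogy, so treat them as my assumption):

```c
#include <assert.h>

typedef struct { double x, y, z; } Vec3;

/* Inward-facing normals of the four side planes of the view frustum,
   plus the view direction, built from glFrustum parameters. Assumes the
   camera sits at the origin looking down the -z axis. */
void frustum_side_normals(double left, double right,
                          double bottom, double top,
                          double zNear, Vec3 out[5]) {
    out[0] = (Vec3){  zNear, 0.0,    left   };  /* vec0: left plane     */
    out[1] = (Vec3){ -zNear, 0.0,   -right  };  /* vec1: right plane    */
    out[2] = (Vec3){ 0.0,   -zNear, -top    };  /* vec2: top plane      */
    out[3] = (Vec3){ 0.0,    zNear,  bottom };  /* vec3: bottom plane   */
    out[4] = (Vec3){ 0.0, 0.0, -1.0 };          /* vec4: view direction */
}
```

A quick sanity check: for a point straight ahead of the camera, e.g. (0, 0, -zNear), the dot product with each of vec0 - vec3 comes out positive, which matches the normals facing inside.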

Are these vectors normals to the clip planes of a pyramid or a prism?
I’ve answered that.
Ok, when words fail, use images:

This is a top view.
I’ll only explain left and right clipping.
left, right, zNear and zFar are parameters passed to glFrustum.

Just to avoid confusion: zNear and zFar are always positive - they’re just distances along the z-axis from the observer.

As you can see:
vec0 = (zNear, 0.0, left)
vec1 = (-zNear, 0.0, -right)
vec4 = (0.0, 0.0, -1.0)

Example:
left = -1.0, right = 1.0, zNear = 2.0
So we get:
vec0 = (2.0, 0.0, -1.0)
vec1 = (-2.0, 0.0, -1.0)

If you rotate your camera then you must rotate all these vectors, too. You don’t need a matrix - just use rotation equations. This one rotates around z-axis:
vec’.x = vec.x * cos(a) + vec.y * sin(a);
vec’.y = vec.y * cos(a) - vec.x * sin(a);

When you move your camera you don’t need to do anything with these vectors - they’re just directions anyway.

To test whether a point is inside the frustum, calculate the vector from the camera to that point and test its dot products with vec0 - vec3: since those normals face inside, all four dot products must be non-negative.
If you compute the dot product with vec4, you’ll get the z-coordinate that you can compare with zNear and zFar.
You can skip the comparison with zNear if you want.
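Putting the whole test together as a sketch in C (names are mine; normals[0..3] are the inward side-plane normals and normals[4] the view direction, as described above; the optional zNear comparison is kept here):

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* cam: camera position, p: point to test.
   normals[0..3]: inward-facing side-plane normals (vec0 - vec3),
   normals[4]: view direction (vec4). */
bool point_visible(const Vec3 normals[5], Vec3 cam, Vec3 p,
                   double zNear, double zFar) {
    /* Vector from the camera to the point. */
    Vec3 d = { p.x - cam.x, p.y - cam.y, p.z - cam.z };

    /* A negative dot product with an inward normal means the point is
       outside that side plane. */
    for (int i = 0; i < 4; ++i)
        if (dot(normals[i], d) < 0.0)
            return false;

    /* Dot with the view direction gives the z-coordinate (depth). */
    double z = dot(normals[4], d);
    return z >= zNear && z <= zFar;
}
```

This costs five dot products per point in the worst case, and fewer on average thanks to the early returns.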

Ok, that’s all I wanted to know - now I believe I understand how those values describe the pyramid. Thanks.

Now, wouldn’t it be faster just to determine whether the projected point lies in a plane, inside the triangle, using the area test? (YX and YZ planes)
And creating the projections would be childishly easy.

Testing whether a point projected onto a triangle’s plane lies inside that triangle is more or less as time-consuming as the test I’ve described above. But you would have to perform two such triangle tests, so it would be about 2 times slower.
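For reference, here is what one such 2D area (sign) test looks like, as a sketch in C (names are mine; this is the per-plane test mentioned above - you would run it once per projection plane, hence the doubled cost):

```c
#include <assert.h>
#include <stdbool.h>

/* Twice the signed area of triangle A, B, C in 2D; the sign tells
   which side of the directed edge A->B the point C lies on. */
static double cross2(double ax, double ay, double bx, double by,
                     double cx, double cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

/* Point-in-triangle via the sign of the three sub-areas: the point is
   inside iff all three signs agree (works for either winding order). */
bool point_in_triangle(double px, double py,
                       double ax, double ay, double bx, double by,
                       double cx, double cy) {
    double d1 = cross2(ax, ay, bx, by, px, py);
    double d2 = cross2(bx, by, cx, cy, px, py);
    double d3 = cross2(cx, cy, ax, ay, px, py);
    bool has_neg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool has_pos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(has_neg && has_pos);
}
```

Per plane this is three 2D cross products, which is indeed in the same ballpark as the dot-product test - but needed twice.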

How can I perform that check if the points (for testing) were just thrown at the screen using the push/pop matrix technique (I never calculated their relative-to-camera positions myself)?

I understand that I should somehow transform the position of the camera (sounds better than transforming the-whole-world instead) using the inverted modelview matrix, right?

So the question is - how to do it?

I have a root point A (before any transformations), the position and direction of the camera, and of course the world data (static), all relative to point A. Until now I was rendering this way: translating and rotating with the reversed camera settings, and then simply throwing the world into the pipeline. This time it simply wouldn’t work…
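One way to sketch this, assuming you can read the current modelview matrix back (e.g. with glGetDoublev(GL_MODELVIEW_MATRIX, m)): instead of inverting it to move the camera, transform each test point forward into eye space. There the camera sits at the origin looking down -z, so the frustum test above applies directly with no inverse matrix needed. The function name is mine, and the matrix is assumed to be affine (no projective part):

```c
#include <assert.h>

typedef struct { double x, y, z; } Vec3;

/* Transform a world-space point into eye (camera) space using a
   column-major 4x4 matrix laid out the way OpenGL stores the
   modelview matrix (translation in m[12], m[13], m[14]). */
Vec3 to_eye_space(const double m[16], Vec3 p) {
    Vec3 e;
    e.x = m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12];
    e.y = m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13];
    e.z = m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14];
    return e;
}
```

With this you never have to rotate the frustum normals at all: the points come to the frustum instead of the frustum going to the points.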