# Calculating projected area

I am not trying to render a scene, but I need to calculate the projected areas of various surfaces of a given 3D model as seen from an origin. I need access to the transformed vertices so I can do the area calculations. Does anyone know of a simple/straightforward way of doing this? Thanks.
-Justin Frodsham
zeppelin@io.com

To be more specific, I want to define a surface (a quad in this case). Each of the 4 points has an x, y, z. I want to translate and/or rotate the surface defined by the points. I then want to find the vector angle, distance, and projected area relative to another single point defined by an x, y, z. I have trouble wading through 3D graphics examples, because they are so centred on drawing a scene. I just need the data.

Thanks in advance for any help,
Justin

Let me see if I understand this correctly… you have a point x,y,z that undergoes a transformation matrix M. The result is x’,y’,z’. You want to know the values of x’,y’,z’?

If that’s the case, you should do some reading on linear algebra. Basically, you just multiply the matrix by the old vertex, and the result is the new vertex. If you have multiple transformations, they can all be combined into a single matrix by multiplying the matrices together.
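As a concrete sketch (not from the original posts; plain Python, with row-major matrices and column vectors assumed), multiplying a 4×4 transform into a homogeneous point, and composing two transforms, looks like this:

```python
# Sketch: apply a 4x4 transform to a point in homogeneous co-ordinates,
# and compose two transforms by matrix multiplication.
# Row-major matrices acting on column vectors are assumed.

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mat(a, b):
    """Multiply two 4x4 matrices (b is applied first to column vectors)."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

# A translation by (2, 3, 4):
T = [[1, 0, 0, 2],
     [0, 1, 0, 3],
     [0, 0, 1, 4],
     [0, 0, 0, 1]]

v = [1.0, 1.0, 1.0, 1.0]     # point (1, 1, 1) in homogeneous form
print(mat_vec(T, v))          # -> [3.0, 4.0, 5.0, 1.0]
```

Combining, say, a rotation R and the translation T into one matrix is just `mat_mat(T, R)`, after which every vertex needs only a single `mat_vec` call.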

Originally posted by Deiussum:
[b]Let me see if I understand this correctly… you have a point x,y,z that undergoes a transformation matrix M. The result is x’,y’,z’. You want to know the values of x’,y’,z’?

[/b]

I have a 3D object with multiple surfaces. I want to translate in x, y, z and/or rotate the whole object, and then calculate the projected area of those surfaces as seen from a single point (0,0,0).

Thanks,
Justin

zeppelin,

What kind of projection are you doing (orthogonal or perspective)? If perspective, then you must define a plane to project onto, not a point (projecting onto a point gives an area of 0.0 by definition).

In principle you can use the gluProject() function to calculate <xyz>’ from <xyz>.

Another approach could be to render the scene and analyse the result (I’ve heard about some extensions that can give you feedback on the number of pixels written, but can’t think of their names right now – it’s an improvement by nVidia on the HP_OCCLUSION_TEST extension, I believe).
You could also do a glReadPixels after drawing and analyse the picture yourself, depending on your performance needs of course!
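A minimal sketch of the read-back idea, assuming you have already rendered the surface in a unique flat colour and pulled the RGBA bytes back with glReadPixels (the buffer below is synthetic and the helper name is made up):

```python
# Sketch of the glReadPixels approach. In a real program the bytes
# would come from glReadPixels after drawing the surface in a unique
# flat colour; here a tiny synthetic buffer stands in for that.

def count_surface_pixels(rgba, surface_colour):
    """Count pixels in an RGBA byte buffer whose (r, g, b) match surface_colour."""
    n = 0
    for i in range(0, len(rgba), 4):
        if tuple(rgba[i:i + 3]) == surface_colour:
            n += 1
    return n

# 2x2 image: two red pixels, two black.
buf = bytes([255, 0, 0, 255,   0, 0, 0, 255,
             255, 0, 0, 255,   0, 0, 0, 255])
print(count_surface_pixels(buf, (255, 0, 0)))   # -> 2
```

The pixel count, times the area each pixel covers on the view plane, approximates the projected area; accuracy depends on the rendering resolution.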

HTH

Jean-Marc

Here are the calculations. Note… I’m lousy with matrices, so this will not be in matrix form.

First you’ll have to assign a local co-ordinate system to your object and your camera. Each will have a Centre (in real-world co-ordinates) and 3 perpendicular vectors representing the three axes of the local co-ordinate system (also specified in real-world terms). The axes should be normalised. The camera will also need a view-plane depth.
Pseudo Code

Camera {
Vector Centre, x_axis, y_axis, z_axis;
double ViewPlaneDepth;
}

Object {
Vector Centre, x_axis, y_axis, z_axis;
}

Now specify the points you want to transform relative to the object’s local co-ordinate system. This is the same thing as centring it around (0, 0, 0). It’s just that it will be used as a relative co-ordinate.
Pseudo Code

Vector P

Then to transform the point based on the camera you’d do the following.
Pseudo Code

Vector Transform(Vector P) {
Vector Result = Object.Centre + P.x * Object.x_axis + P.y * Object.y_axis + P.z * Object.z_axis
// Convert relative point to real world co-ordinates.
Vector VecToTarget = Result - Camera.Centre
// Get vector from camera to converted point.
Result = {dot(Camera.x_axis, VecToTarget), dot(Camera.y_axis, VecToTarget), dot(Camera.z_axis, VecToTarget)}
// Get point relative to camera.
Result.x = Result.x * Camera.ViewPlaneDepth / Result.z
Result.y = Result.y * Camera.ViewPlaneDepth / Result.z
// Project on to view plane.
return Result
// Return transformed point.
}
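A runnable version of the pseudocode above might look like this (Python; the field names mirror the post, and I’ve assumed the object’s Centre is added in when converting the relative point to world co-ordinates):

```python
# Runnable sketch of the transform described above. Class and field
# names mirror the pseudocode; everything else is an assumption.
from dataclasses import dataclass

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

@dataclass
class Camera:
    centre: tuple
    x_axis: tuple
    y_axis: tuple
    z_axis: tuple
    view_plane_depth: float

@dataclass
class Object3D:
    centre: tuple
    x_axis: tuple
    y_axis: tuple
    z_axis: tuple

def transform(p, obj, cam):
    # Convert the relative point to real-world co-ordinates.
    world = tuple(obj.centre[i]
                  + p[0] * obj.x_axis[i]
                  + p[1] * obj.y_axis[i]
                  + p[2] * obj.z_axis[i] for i in range(3))
    # Vector from the camera to the converted point.
    to_target = tuple(world[i] - cam.centre[i] for i in range(3))
    # Express the point in camera co-ordinates.
    x = dot(cam.x_axis, to_target)
    y = dot(cam.y_axis, to_target)
    z = dot(cam.z_axis, to_target)
    # Perspective projection onto the view plane (skip for orthographic).
    return (x * cam.view_plane_depth / z,
            y * cam.view_plane_depth / z,
            z)
```

For example, a camera at the origin looking down its own z-axis, with an object centred at (0, 0, 5), maps the relative point (1, 1, 0) to (0.2, 0.2, 5.0) on a view plane at depth 1.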

You can do whatever calculations you want using the (x, y) components of the transformed point (z gives transformed depth).

NOTE: The projection section of the code gives perspective projection (at least one form of it). If you want orthographic skip that step. Another form of perspective is to use…

Result.x = Result.x * Camera.ViewPlaneDepth / VecToTarget.Length
Result.y = Result.y * Camera.ViewPlaneDepth / VecToTarget.Length
// Project on to view plane.

I can’t say whether OpenGL uses either of these for its projection, or whether there is another method. If I remember right, older games used one of the two methods above, or a mixture of the two. I think both methods lead to slight distortions.

PS. Does anyone know how to put this in a matrix form?

The problem is that when doing graphics, the points are typically projected back onto a 2D plane (i.e. the screen). I need the projected areas as viewed from the origin (say (0,0,0)). The 2D plane I need them projected onto is located at the object and normal to the direction to the origin. Did I make sense? Another way to look at it is this: what are the angle and distance to each vertex from (0,0,0), after I do the translation and rotation?

Thanks for all the help,
Justin

I’m still not quite understanding what you want. You talk about projection planes, then say all you want is the distance and angle of the transformed vertices to the origin.

To get the transformed vertex, you just do this…

newVert = T * R * oldVert

Where T is your translation matrix, and R is your rotation matrix.
(Or was that newVert = oldVert * T * R, I never remember)

Now you have your new vert… say it’s 10,0,0 for example… distance from the origin is 10. Angle is 0 from the x-axis.
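As a sketch of that last step (plain Python; the rotation about the z-axis is just an illustrative choice, and the angle is measured from the x-axis as in the example):

```python
# Sketch: translate/rotate a vertex, then take its distance and
# angle from the x-axis as seen from the origin.
import math

def rotate_z(v, angle):
    """Rotate (x, y, z) about the z-axis by angle radians."""
    x, y, z = v
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle),
            z)

def translate(v, t):
    return tuple(v[i] + t[i] for i in range(3))

# Identity rotation and zero translation reproduce the 10,0,0 example.
v = translate(rotate_z((10.0, 0.0, 0.0), 0.0), (0.0, 0.0, 0.0))
dist = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
angle_from_x = math.atan2(math.hypot(v[1], v[2]), v[0])
print(dist, angle_from_x)   # -> 10.0 0.0
```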

zeppelin,

OpenGL is a rendering library. It will not help you.

However, the math behind OpenGL is very similar to what you want to do. What you want to do is to project your 3d model onto a plane and then compute the area using the X and Y coordinates of the result. OpenGL does exactly the same thing (except for computing the area) to determine where vertices are drawn on the screen.
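Putting that together, a minimal sketch (Python; the quad, the view-plane depth, and the helper names are all illustrative): project each vertex onto a view plane with the eye at the origin, then take the area of the resulting 2D polygon with the shoelace formula.

```python
# Sketch of the whole pipeline: perspective-project each vertex of a
# polygon onto the plane z = d (eye at the origin), then compute the
# area of the projected 2D polygon with the shoelace formula.

def project(v, d):
    """Perspective-project (x, y, z) onto the plane z = d, eye at origin."""
    x, y, z = v
    return (x * d / z, y * d / z)

def shoelace_area(pts):
    """Area of a 2D polygon whose vertices are given in order."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1]
            - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))
    return abs(s) / 2.0

# Unit quad facing the eye at depth 2, view plane at depth 1:
quad = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (1.0, 1.0, 2.0), (0.0, 1.0, 2.0)]
projected = [project(v, 1.0) for v in quad]
print(shoelace_area(projected))   # -> 0.25
```

Note the shoelace result is the area on the chosen view plane; for a surface that is not parallel to the plane, the foreshortening is exactly what "projected area" means here.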