# Transforming the viewport

Hello,
I need help with the following.

I’m extending my volume renderer with a raycasting algorithm. I already have a viewport-aligned slicing algorithm implemented.

Raycasting works so far, with one exception: I can’t move inside the volume, because the bounding box I use to create the entry vectors is clipped as soon as it moves outside the near plane. I know that’s because this part is outside the frustum. So my question is: how can I transform the viewport, in this case the near plane, into the object coordinates of the bounding box?

I know that I can get the viewport with glGetIntegerv(GL_VIEWPORT, …). The bounding box goes from
(0,0,0) to (1,1,1), so the transformed near plane/viewport has to be in 3D coordinates within the range of the bounding box.

As far as I know, I can transform to object space with the inverse transpose of the modelview matrix, right? But then I need to clamp the values to the bounding box range. And where do I get the z coordinate?

Does anyone understand what I’m trying to do?

regards,
lobbel

As I understand it, what you want to do is find out whether or not the bounding box is being clipped by the near clip plane.
Is that correct?
It should be fairly easy to do: just transform the vertices of the bounding box using the modelview matrix and then check if any of them has a z coordinate > -nearClipPlane. That means they are located closer to the viewer than the near clip plane.
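
A minimal sketch of that test, assuming OpenGL's column-major matrix layout (the helper names are my own, not from any library):

```cpp
#include <cassert>

// Eye-space z of a point (x,y,z,1) under a column-major 4x4 modelview matrix.
static float eyeZ(const float m[16], float x, float y, float z) {
    return m[2] * x + m[6] * y + m[10] * z + m[14];
}

// True if any corner of the axis-aligned box [boxMin, boxMax] lies closer to
// the viewer than the near clip plane, i.e. has eye-space z > -zNear.
bool boxCrossesNearPlane(const float modelview[16],
                         const float boxMin[3], const float boxMax[3],
                         float zNear) {
    for (int i = 0; i < 8; ++i) {
        const float x = (i & 1) ? boxMax[0] : boxMin[0];
        const float y = (i & 2) ? boxMax[1] : boxMin[1];
        const float z = (i & 4) ? boxMax[2] : boxMin[2];
        if (eyeZ(modelview, x, y, z) > -zNear)
            return true;
    }
    return false;
}
```

With an identity modelview the unit box straddles the near plane; pushed back 5 units along -z it does not.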
Hope that helps.

As far as I know, I can transform to object space with the inverse transpose of the modelview matrix, right? But then I need to clamp the values to the bounding box range. And where do I get the z coordinate?

If I’m not mistaken, the matrix used to transform points from camera space back to object space is the inverse of the modelview.
The inverse transpose of the modelview is used to transform normal vectors so that they remain normal after the transformation (covariant vectors).
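
A tiny numeric illustration of that distinction (my own example, not from the thread): under the non-uniform scale diag(2,1,1), a surface tangent is transformed by the matrix itself, but a normal only stays perpendicular if it is transformed by the inverse transpose, diag(1/2,1,1).

```cpp
#include <cassert>

// Dot product of two 3-vectors.
static float dot3(const float a[3], const float b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Applies the diagonal matrix diag(sx, sy, sz) to v.
static void applyScale(float sx, float sy, float sz,
                       const float v[3], float out[3]) {
    out[0] = sx * v[0];
    out[1] = sy * v[1];
    out[2] = sz * v[2];
}
```

With tangent (1,-1,0) and normal (1,1,0), transforming both by the matrix gives a dot product of 3 (no longer perpendicular), while using the inverse transpose for the normal keeps the dot product at 0.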

Hello, thanks for your answer. I don’t want to check whether the box is outside or inside the volume.
I don’t know if you are familiar with volume rendering, especially GPU raycasting. The basic algorithm is to set up a box that is color encoded: each vertex gets a color assigned according to its coordinates, so the vertex at position (0,0,0) gets the color (0,0,0), the vertex at (1,1,1) gets the color (1,1,1), and so on.

This colored cube is drawn once with backface culling and once with frontface culling, so the front- and back-facing colored sides provide the entry points.

However, when I want to move inside the volume there is one problem: in the area where part of the cube is clipped away at the near plane there is also no color. That means I have to fix this hole.
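
The color assignment described above is just the vertex position, normalized to the box. A sketch for a general axis-aligned box (for the (0,0,0)-(1,1,1) box it is the identity mapping, which is why position and color coincide):

```cpp
#include <cassert>

// Maps a vertex position inside an axis-aligned box to the RGB color used
// for the entry-point encoding: boxMin maps to (0,0,0), boxMax to (1,1,1).
void positionToColor(const float p[3], const float boxMin[3],
                     const float boxMax[3], float rgb[3]) {
    for (int i = 0; i < 3; ++i)
        rgb[i] = (p[i] - boxMin[i]) / (boxMax[i] - boxMin[i]);
}
```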

Since the near plane is always right in front of the viewer, I want to project this near plane to fix the hole in the cube. This approach is described in a paper on volume rendering, but only briefly.

regards,
lobbel

I understand the basic concepts behind volume ray casting (as described here, for instance: http://en.wikipedia.org/wiki/Volume_rendering#Volume_ray_casting) - but I haven’t done it myself.
To better understand your problem, can you post a screenshot of the rendering where the near clip plane chops off the cube?

Anyway, if what you need to know is the coordinates of the near clipping plane in the model coordinate system of the bounding box you can do the following:

1. It is known that the coordinates of the near clipping plane’s points in homogeneous clip space are (in counterclockwise order): {-1;-1;-1}, {1;-1;-1}, {1;1;-1}, {-1;1;-1}.
The following transformation transforms each of these points to the world coordinate space (actually to the model space - but I assume that the model transform is the identity):
P’ = (Inv(MV) * (Inv(MP) * P)),
where Inv(MV) is the inverse of the modelview matrix and Inv(MP) is the inverse of the projection matrix.
2. Next you need to calculate the matrix (MB) that transforms the points to the coordinate system of the bounding box, which can easily be constructed by choosing axis vectors parallel to the edges of the box.

So the final transform is:
P’ = (MB * (Inv(MV) * (Inv(MP) * P))).
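
Step 1 can be sketched with the closed-form inverse of a gluPerspective matrix (a minimal example assuming an identity modelview, so Inv(MV) drops out; note the perspective divide by w at the end):

```cpp
#include <cassert>
#include <cmath>

// Transforms a homogeneous clip-space point back to eye space using the
// analytic inverse of the gluPerspective(fovyDeg, aspect, zNear, zFar)
// matrix, then performs the perspective divide.
void clipToEye(const float clip[4], float fovyDeg, float aspect,
               float zNear, float zFar, float eye[3]) {
    const float f = 1.0f / std::tan(fovyDeg * 3.14159265f / 360.0f);
    const float A = (zNear + zFar) / (zNear - zFar);
    const float B = 2.0f * zNear * zFar / (zNear - zFar);
    const float x = clip[0] * aspect / f;
    const float y = clip[1] / f;
    const float z = -clip[3];                       // eye z comes from clip w
    const float w = clip[2] / B + clip[3] * A / B;  // homogeneous w
    eye[0] = x / w;
    eye[1] = y / w;
    eye[2] = z / w;
}
```

For fovy = 90°, aspect = 1, zNear = 1 this maps the clip-space corner {-1;-1;-1;1} to the eye-space point (-1,-1,-1).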

Hope this is what you need.

Hello, thanks for the help again.
You suggested using {-1;-1;-1}, {1;-1;-1}, {1;1;-1}, {-1;1;-1} as
the viewport coordinates. My question is: why not use the viewport coordinates I get with glGetIntegerv(GL_VIEWPORT, …)?
Or will the result be the same? Using the coordinates from glGetIntegerv(GL_VIEWPORT, …) will cause a clamp of the viewport coordinates to the range [0,1], right?

By the way, here is some text from a paper.

As mentioned above, the first task is to calculate the starting positions at the near clipping plane. To do this, all four bounding vectors of the current viewport have to be transformed to the volume coordinate system and assigned their respective colors. Then a simple quad is drawn with these four vertices, and since OpenGL interpolates the color values, a starting position for each pixel is automatically generated.

So it says to use the 4 bounding vectors of the viewport. That’s why I assumed I should use the 4 values from
glGetIntegerv(GL_VIEWPORT, …). But they only have 2 components, x and y; z is missing. What value is z? Furthermore, it says to draw a simple quad. I guess that’s not a fullscreen quad, but the quad made from the transformed vectors that fits the hole.

Do you have any ideas how to obtain the z coordinate for the 4 viewport vectors?

best regards,
lobbel

You suggested using {-1;-1;-1}, {1;-1;-1}, {1;1;-1}, {-1;1;-1} as
the viewport coordinates. My question is: why not use the viewport coordinates I get with glGetIntegerv(GL_VIEWPORT, …)?
Or will the result be the same? Using the coordinates from glGetIntegerv(GL_VIEWPORT, …) will cause a clamp of the viewport coordinates to the range [0,1], right?

No. The coordinates you get with glGetIntegerv(GL_VIEWPORT, …) are the window-space coordinates corresponding to the points I mentioned.
They are not clamped to the [0,1] range; they are what you specified previously when you called glViewport(…).
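
For reference, the relationship between the two is the standard NDC-to-window mapping (a sketch; vp holds {x, y, width, height} as returned by glGetIntegerv(GL_VIEWPORT, …)):

```cpp
#include <cassert>

// Maps normalized device coordinates in [-1,1] to window coordinates using
// the viewport rectangle vp = {x, y, width, height}.
void ndcToWindow(float xNdc, float yNdc, const int vp[4],
                 float* xWin, float* yWin) {
    *xWin = vp[0] + (xNdc + 1.0f) * 0.5f * vp[2];
    *yWin = vp[1] + (yNdc + 1.0f) * 0.5f * vp[3];
}
```

So for a 640x480 viewport the NDC corner (-1,-1) maps to window (0,0) and (1,1) maps to (640,480) - window coordinates, not anything clamped to [0,1].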

By the way, here is some text from a paper.

[quote]As mentioned above, the first task is to calculate the starting positions at the near clipping plane. To do this, all four bounding vectors of the current viewport have to be transformed to the volume coordinate system and assigned their respective colors. Then a simple quad is drawn with these four vertices, and since OpenGL interpolates the color values, a starting position for each pixel is automatically generated.
[/QUOTE]
It is hard for me to say from this excerpt what the “starting position at the near clip plane” is, but I guess those are probably the vertices resulting from the intersection of the view frustum’s bounding vectors with the near clip plane, which is exactly what you get after you perform step 1 of the two transformations I described in my previous post. Then you need to transform those vertices to the “volume coordinate system” (which I guess is the bounding box coordinate system) - this is what you get after you perform step 2.

So it says to use the 4 bounding vectors of the viewport. That’s why I assumed I should use the 4 values from
glGetIntegerv(GL_VIEWPORT, …).

glGetIntegerv(GL_VIEWPORT, …) does not give you bounding vectors. It gives you the current viewport coordinates.

The bottom line: I highly suggest you read some good books on the important coordinate spaces used by OpenGL and the transformations OpenGL performs on vertices as they go through the rendering pipeline. “Mathematics for 3D Game Programming & Computer Graphics” by Eric Lengyel is one of the best books on the subject. Chapter 0: The Rendering Pipeline gives a high-level overview of the rendering pipeline and the different coordinate spaces in particular.

Best Regards,
igorgiv

P.S. A few corrections to my previous post:
For step 1: as the perspective matrix is a 4x4 matrix, the coordinates of the near clipping plane’s points must be complemented with a w coordinate of 1, i.e. they must be {-1;-1;-1;1}, {1;-1;-1;1}, {1;1;-1;1}, {-1;1;-1;1}.
For step 2: I forgot to mention that the matrix transforming to the bounding box coordinate space must be inverted as well.

Since the bounding box goes from (0,0,0) to (1,1,1), I think I can skip step two. A matrix built from the vectors parallel to the box edges will be the identity matrix, I think, and its inverse is the identity matrix itself.

So I tried to transform the first point you gave me, (-1,-1,-1,1), by the inverse of the modelview matrix and the inverse of the projection matrix. One vector component got a value below -1.4.
But that’s too much; they have to be in the range (0,0,0) to (1,1,1) because I need these values as RGB colors as well.

I did something like this…

``````
float MV[16] = {0};
float PJ[16] = {0};
glGetFloatv(GL_MODELVIEW_MATRIX, MV);
glGetFloatv(GL_PROJECTION_MATRIX, PJ);
CMatrix4T<float> ModelView(MV);
CMatrix4T<float> Projection(PJ);   // was Projection(MV) - a copy/paste bug
CVector4T<float> v1(-1.0f, -1.0f, -1.0f, 1.0f);

v1 = (ModelView.Inverse() * Projection.Inverse()) * v1;
// v1 is homogeneous here - it still needs to be divided by its w component
``````

Just a few minutes ago I had a new idea to solve my problem. I read about view frustum culling and extraction of the frustum plane vectors. So I thought I could extract the near plane vector and do an intersection test between my cube and the near plane. That will result in 3 to 6 intersection points. Whenever there are at least 3 intersections with the extracted near plane, that means there is a hole in the cube. With these intersection points I can create a polygon that should fit the hole.

But I haven’t tried this out yet. I don’t know if I get intersection points in the range (0,0,0) to (1,1,1). This idea looks similar to the algorithm of volume rendering by 3D texture slicing, where the slice planes are perpendicular to the viewing vector. In this case there is only one plane perpendicular to the viewing vector: the extracted near plane.

What do you think of this?

best regards,
lobbel

Since the bounding box goes from (0,0,0) to (1,1,1), I think I can skip step two. A matrix built from the vectors parallel to the box edges will be the identity matrix, I think, and its inverse is the identity matrix itself.

Okay.

So I tried to transform the first point you gave me, (-1,-1,-1,1), by the inverse of the modelview matrix and the inverse of the projection matrix. One vector component got a value below -1.4.

The positions of the near clipping plane’s points can be anywhere in world space - they depend on the position of the camera (viewer) - so they are not constrained to the [0,1] range. The intersection points of the box and the near clip plane, however, ARE constrained to the (0;0;0)-(1;1;1) range.

Just a few minutes ago I had a new idea to solve my problem. I read about view frustum culling and extraction of the frustum plane vectors. So I thought I could extract the near plane vector and do an intersection test between my cube and the near plane. That will result in 3 to 6 intersection points. Whenever there are at least 3 intersections with the extracted near plane, that means there is a hole in the cube. With these intersection points I can create a polygon that should fit the hole.

This is what I suspected from the beginning you needed to do, I just wasn’t sure… Yes, you can do that. But before you go any further with this: have you tried enabling depth clamping?
Check this out: http://nvidia.fullviewmedia.com/GPU2009/0930-gold-1407.html
(go to approximately 31-st minute of the presentation).

Hello,
that’s a nice video. The depth clamping looks awesome. I wonder if this also works with raycasting, because I need correct color values.

You may have a look here

This is a color-coded cube; the colors represent the entry points.
When the cube hits the near plane, the same thing happens that you saw in the video - but here it’s not part of the alien head that disappears, it’s part of the cube. And that means no entry points. If depth clamping fixes this with correct color values, that would be great.

The intersection points of the box and the near clip plane, however, ARE constrained to the (0;0;0)-(1;1;1) range.

This may sound stupid, but are you talking about the extracted near plane vector? On page 128 of the book you advised me to read there is a table with frustum plane vectors and distances.

best regards
lobbel

This is a color-coded cube; the colors represent the entry points.
When the cube hits the near plane, the same thing happens that you saw in the video - but here it’s not part of the alien head that disappears, it’s part of the cube. And that means no entry points. If depth clamping fixes this with correct color values, that would be great.

That’s what I had in mind. Okay, let me know…

…are you talking about the extracted near plane vector?

No, what I’m talking about is the intersection points of the near clipping plane and the cube edges. They are constrained to the (0;0;0)-(1;1;1) range because you said that your cube’s vertices all have coordinates within that range.

On page 128 of the book you advised me to read there is a table with frustum plane vectors and distances.

Just don’t forget that the coordinates of those plane vectors in table 4.2 are for the homogeneous clip space; you need to perform the transformation in 4.58 to get the camera-space plane vectors…
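
As a sanity check on that transformation: multiplying the clip-space near plane {0; 0; 1; 1} by the transpose of a gluPerspective matrix must yield the camera-space plane z = -zNear. A sketch (a plane is stored as (a,b,c,d) with a*x + b*y + c*z + d = 0):

```cpp
#include <cassert>
#include <cmath>

// Transforms the clip-space near plane {0,0,1,1} (meaning z_clip >= -w_clip)
// to camera space by multiplying with the transpose of the
// gluPerspective(fovyDeg, aspect, zNear, zFar) matrix. Only the last two
// rows of the projection matrix contribute, so the result is (0, 0, A-1, B).
void cameraSpaceNearPlane(float fovyDeg, float aspect,
                          float zNear, float zFar, float plane[4]) {
    (void)fovyDeg; (void)aspect;  // the x/y terms vanish for this plane
    const float A = (zNear + zFar) / (zNear - zFar);
    const float B = 2.0f * zNear * zFar / (zNear - zFar);
    plane[0] = 0.0f;
    plane[1] = 0.0f;
    plane[2] = A - 1.0f;
    plane[3] = B;
}
```

The point (0, 0, -zNear) then satisfies the resulting plane equation, confirming it really is the camera-space near plane.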

good luck,
igorgiv

Hello,
I hope that I’m not driving you crazy…

Remember the previous post where I quoted part of a paper? Now it’s clear to me what the author actually did. The problem is how to do it, because it’s not something people usually do every day with OpenGL.

The problem is still the same: when the near plane cuts the color-coded cube, there is a hole. To fix this, the author of the paper did the following. With respect to the currently set viewport and field of view he calculated the near plane vectors.
Here is an example, but I don’t know if it is correct.
For instance, I set the projection matrix (gluPerspective) with the aspect ratio set to 1.0, the field of view set to 90 degrees, and the distance to the near plane set to 1.0.

The frustum should now have the shape of a cube, so each step in the z direction means an equal step in the x and y directions.
Is this correct? And for this case the near plane vectors
are {-1,-1,-1}, {1,-1,-1}, {1,1,-1}, {-1,1,-1} - the same as in one of your older posts.

Then I can multiply these vectors with the inverse modelview matrix and I get the vectors in object space. Then a fullscreen quad is drawn with color-encoded vertices, where the colors are the transformed near plane vectors, so OpenGL will interpolate the colors for me automatically.

So far this sounds good. Are there algorithms or formulas to calculate these vectors for any given frustum shape, i.e. any field of view? With a different field of view, or a different distance from the near plane to the camera position, a step in the z direction no longer means an equal step in the x and y directions.

best regards,
lobbel

Hi,

The problem is still the same: when the near plane cuts the color-coded cube, there is a hole. To fix this, the author of the paper did the following. With respect to the currently set viewport and field of view he calculated the near plane vectors.
Here is an example, but I don’t know if it is correct.
For instance, I set the projection matrix (gluPerspective) with the aspect ratio set to 1.0, the field of view set to 90 degrees, and the distance to the near plane set to 1.0.

The frustum should now have the shape of a cube, so each step in the z direction means an equal step in the x and y directions.
Is this correct?

No, for a perspective projection the view frustum never takes the shape of a cube.
It is the projection matrix that transforms the view frustum into the cube representing homogeneous clip space.

And for this case the near plane vectors
are {-1,-1,-1}, {1,-1,-1}, {1,1,-1}, {-1,1,-1} - the same as in one of your older posts.

You might set up the camera in such a way that in camera space the vertices would have those coordinates, but that’s not what I meant. Those coordinates are for the homogeneous clip space, and they are always {-1,-1,-1}, {1,-1,-1}, {1,1,-1}, {-1,1,-1} - regardless of how you set up the perspective matrix.

Anyway, after thinking about it I got curious and decided to create a little demo for you to clear up all this confusion.
I then realized that extracting the near clipping plane is a rather trivial task (for any camera setup) and does not involve those homogeneous clip space vertices (though one could extract the plane from those vertices too).
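
The “trivial” extraction amounts to a couple of tangent evaluations. A sketch of the eye-space near-plane corners for an arbitrary gluPerspective setup (this also answers the earlier question about other fields of view):

```cpp
#include <cassert>
#include <cmath>

// Eye-space corners of the near clipping plane for a
// gluPerspective(fovyDeg, aspect, zNear, ...) camera. Order: bottom-left,
// bottom-right, top-right, top-left; each corner is {x, y, -zNear}.
void nearPlaneCorners(float fovyDeg, float aspect, float zNear,
                      float corners[4][3]) {
    const float halfH = std::tan(fovyDeg * 3.14159265f / 360.0f) * zNear;
    const float halfW = halfH * aspect;
    const float xs[4] = {-halfW, halfW, halfW, -halfW};
    const float ys[4] = {-halfH, -halfH, halfH, halfH};
    for (int i = 0; i < 4; ++i) {
        corners[i][0] = xs[i];
        corners[i][1] = ys[i];
        corners[i][2] = -zNear;
    }
}
```

Only for fovy = 90°, aspect = 1, zNear = 1 do these coincide with {-1,-1,-1} and friends; for any other setup the eye-space corners differ, while the clip-space corners stay fixed.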
The demo is available here:
http://www.givgraphics.com/ColorCodedCubeApp/colorcodedcubeapp.exe
(It was built with VS 2005 SP1).

The basic steps the algorithm goes through are these:

1. Extract the camera space near clipping plane (trivial, does not depend on camera setup).
2. Transform that plane to world space using the transpose of the modelview matrix.
3. Calculate the intersection with the colored cube.
4. Rearrange the vertices so that they form a convex polygon in counterclockwise order.
5. Render the intersection to cover the hole.
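
Steps 3 and 4 boil down to clipping the near plane against the cube’s twelve edges and ordering the hits by angle around their centroid. A self-contained sketch (my own code, not the demo’s; the plane is given as n·p = d):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Intersects the plane n.p = d with the edges of the unit cube
// (0,0,0)-(1,1,1) and returns the 3..6 hit points sorted counterclockwise
// around their centroid (as seen from the direction of n).
std::vector<Vec3> planeUnitCube(const Vec3& n, float d) {
    static const int e[12][2][3] = {  // the 12 cube edges as corner pairs
        {{0,0,0},{1,0,0}}, {{0,0,0},{0,1,0}}, {{0,0,0},{0,0,1}},
        {{1,1,1},{0,1,1}}, {{1,1,1},{1,0,1}}, {{1,1,1},{1,1,0}},
        {{1,0,0},{1,1,0}}, {{1,0,0},{1,0,1}}, {{0,1,0},{1,1,0}},
        {{0,1,0},{0,1,1}}, {{0,0,1},{1,0,1}}, {{0,0,1},{0,1,1}}};
    std::vector<Vec3> pts;
    for (int i = 0; i < 12; ++i) {
        const Vec3 a = {float(e[i][0][0]), float(e[i][0][1]), float(e[i][0][2])};
        const Vec3 b = {float(e[i][1][0]), float(e[i][1][1]), float(e[i][1][2])};
        const float da = n.x * a.x + n.y * a.y + n.z * a.z - d;
        const float db = n.x * b.x + n.y * b.y + n.z * b.z - d;
        if (da * db < 0.0f) {  // endpoints on opposite sides of the plane
            const float t = da / (da - db);
            pts.push_back({a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
                           a.z + t * (b.z - a.z)});
        }
    }
    if (pts.size() < 3) return pts;     // no hole to cover
    Vec3 c = {0, 0, 0};                 // centroid of the hits
    for (size_t i = 0; i < pts.size(); ++i) {
        c.x += pts[i].x / pts.size();
        c.y += pts[i].y / pts.size();
        c.z += pts[i].z / pts.size();
    }
    const Vec3 u = {pts[0].x - c.x, pts[0].y - c.y, pts[0].z - c.z};
    const Vec3 v = {n.y * u.z - n.z * u.y,  // v = n x u, second in-plane axis
                    n.z * u.x - n.x * u.z,
                    n.x * u.y - n.y * u.x};
    std::sort(pts.begin(), pts.end(), [&](const Vec3& p, const Vec3& q) {
        const Vec3 dp = {p.x - c.x, p.y - c.y, p.z - c.z};
        const Vec3 dq = {q.x - c.x, q.y - c.y, q.z - c.z};
        return std::atan2(dp.x * v.x + dp.y * v.y + dp.z * v.z,
                          dp.x * u.x + dp.y * u.y + dp.z * u.z)
             < std::atan2(dq.x * v.x + dq.y * v.y + dq.z * v.z,
                          dq.x * u.x + dq.y * u.y + dq.z * u.z);
    });
    return pts;
}
```

Rendering this polygon as a triangle fan, with each vertex’s color equal to its position, fills the hole with correct entry points; by construction all hit points stay inside (0,0,0)-(1,1,1).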

The source code is available here:
http://www.givgraphics.com/ColorCodedCubeApp/src.zip
A few screenshots you can find below.
Unfortunately (for you), the application is using my own framework (as it was much easier for me to do it this way) and I cannot give you all the source code files (I only included the relevant ones), so you won’t be able to compile it. But hopefully it will make clear how the algorithm works and help you write your own implementation.

I commented all the relevant parts in the code so hopefully it should not be difficult for you to understand what’s going on.

Best regards,
igorgiv

P.S. You can control the camera using the a/s/d/w/r/f keys and the mouse.
You can also switch between solid/wireframe rendering (e), move the near clipping plane (n / shift+n), and enable/disable depth clamping (c). Spacebar restores the initial camera position.

Hello igorgiv,
thank you very much for your help; things are clearer now. With your sample code and some pages I read about the OpenGL basics (frustum culling), I now know how to do that. I also now know how to do it the other way mentioned in the paper. It’s good to know there is more than one way to solve this problem.

so, thank you again very much for your help and sorry for the

best regards,
lobbel

Hello lobbel,

That’s all right - I just felt that I needed to go a bit further, to show you a working example (and check it for myself too) and make it clearer.

So I’m glad that it helped.
Good luck with the other way.

Best Regards,
igorgiv

Hello igorgiv,
in your example you did the intersection calculation between the near plane and the color-coded bounding box. Some months ago I told you that I also wanted to try a different method, but it’s not working the way it should.

So here is what I did. Let’s assume the resolution is 640 * 480, so the aspect ratio is 4:3.

• set up the frustum with
``````
float fovY = 90.0f;
float aspect_ratio = 4.0f / 3.0f;   // 640/480
gluPerspective(fovY, aspect_ratio, 1.0f, 100.0f);
``````
• extract the near plane in eye space:
In eye space the camera position is at (0,0,0). So to get the near plane vectors I first calculate the half height of the near plane:
``````
float ndist = 1.0f;   // near plane distance, as set in gluPerspective
float fovy_half = deg_to_rad * fovY * 0.5f;
float near_height = tan(fovy_half) * ndist;
float near_width  = near_height * aspect_ratio;

Vector3 nbl(-near_width, -near_height, -ndist);
Vector3 nbr( near_width, -near_height, -ndist);
Vector3 ntr( near_width,  near_height, -ndist);
Vector3 ntl(-near_width,  near_height, -ndist);
``````

ndist - nearplane distance, as set in gluPerspective
nbl - near bottom left
nbr - near bottom right
ntr - near top right
ntl - near top left

So far so good. These are the vectors of the near plane in eye space. To get the object-space coordinates I have to multiply each vector with the inverse modelview matrix.

``````
Matrix4<float> mv;
glGetFloatv(GL_MODELVIEW_MATRIX, mv.get());
mv = Matrix4<float>::transpose(Matrix4<float>::inverse(mv));

nbl = mv * nbl;
nbr = mv * nbr;
ntr = mv * ntr;
ntl = mv * ntl;

``````

With these 4 vectors I draw a fullscreen quad with depth testing disabled, and then draw the color-coded cube.

``````
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();              // identity so the quad is really fullscreen
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();

glBegin(GL_QUADS);             // glBegin was missing before the colors
glColor3f(nbl.x, nbl.y, nbl.z); glVertex3f(-1.0f, -1.0f, -1.0f);
glColor3f(nbr.x, nbr.y, nbr.z); glVertex3f( 1.0f, -1.0f, -1.0f);
glColor3f(ntr.x, ntr.y, ntr.z); glVertex3f( 1.0f,  1.0f, -1.0f);
glColor3f(ntl.x, ntl.y, ntl.z); glVertex3f(-1.0f,  1.0f, -1.0f);
glEnd();

glPopMatrix();                 // restores GL_PROJECTION
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
``````

But the result looks like this

Hope you can find something.

regards,
lobbel