This is a hard question to word correctly, so please forgive my terminology.

Basically, for volumetric environmental particle effects – rain, snow, patchy fog, etc. – I’m working on a way to populate *just* the view frustum with particles. The idea is that as these particles fall out of the frustum (be it due to wind, gravity, etc.) they will be respawned on the mirror side of the frustum to continue. So if a particle leaves the bottom, it will be respawned at the top, and so on. This ought to allow the camera to sweep around while keeping the appearance of a continuous volume of particles.

For purposes of mathematical simplicity, it seems best to me to let the particles exist in a unit cube, which would be projected into world space to correspond to the view frustum. Collision detection with the faces of the cube would be trivial, and projection to world space ought to be simple enough too.

This is what I’m getting when I project a (large, for testing) box using the projection matrix and modelview matrix of a camera (which the current camera is looking at):

This is what I’m looking to get:

My proof-of-concept code below draws a blue sphere where the “rainview” camera is located, with a blue line in the direction that camera is facing. Then it draws a pink box where the camera is, using the concatenation of the inverse projection and modelview matrices. The pink box positions itself correctly and faces the right direction; it just happens to be a box, not a frustum.

```
void displayRainViewCamera( void )
{
    /*
        Only draw if current camera isn't the "rainview" camera
    */
    if ( CameraCollection::currentCamera()->name() != CAMERA_RAINVIEW )
    {
        Camera *rv = CameraCollection::cameraNamed( CAMERA_RAINVIEW );
        vec3 pos( rv->position() ), look( rv->looking() );

        /*
            Draw a sphere at its position, and a line going
            in the direction the camera's facing
        */
        glPushMatrix();
        glTranslatef( pos.x, pos.y, pos.z );
        glColor3f( 0.75, 0.75, 1 );
        glutSolidSphere( 2, 10, 10 );

        glBegin( GL_LINES );
        glVertex3f( 0, 0, 0 );
        glVertex3fv( ( look * 10 ).v );
        glEnd();
        glPopMatrix();

        /*
            Make a large cube. I will use a unit cube, but for
            now I'm making a cube of size 40, with z=0 at the near plane.
        */
        float size = 20;
        vec3 cube[8] =
        {
            vec3( -size, -size, 0 ),
            vec3(  size, -size, 0 ),
            vec3(  size,  size, 0 ),
            vec3( -size,  size, 0 ),
            vec3( -size, -size, size ),
            vec3(  size, -size, size ),
            vec3(  size,  size, size ),
            vec3( -size,  size, size )
        };

        /*
            Get the modelview and projection matrices from
            the "rainview" camera ( this is *not* what's being used by the
            current camera ). Get their inverses, and concatenate.

            NOTE: My cameras use an infinite projection matrix for
            stencil shadows. The method projectionWithFarPlane() gives
            you a projection matrix with a "fake" far plane, which
            I've set to 500 elsewhere.
        */
        mat4 projection( rv->projectionWithFarPlane() ),
             modelview( rv->modelview() ),
             projectionInverse( projection.inverse() ),
             modelviewInverse( modelview.inverse() );

        mat4 m = modelviewInverse * projectionInverse;

        /*
            Transform the cube's points
        */
        for ( int i = 0; i < 8; i++ )
        {
            cube[i] = m * cube[i];
        }

        /*
            Draw it as lines
        */
        glColor3f( 1, 0.75, 0.75 );

        glBegin( GL_LINE_LOOP );
        glVertex3fv( cube[0] );
        glVertex3fv( cube[1] );
        glVertex3fv( cube[2] );
        glVertex3fv( cube[3] );
        glEnd();

        glBegin( GL_LINE_LOOP );
        glVertex3fv( cube[4] );
        glVertex3fv( cube[5] );
        glVertex3fv( cube[6] );
        glVertex3fv( cube[7] );
        glEnd();

        glBegin( GL_LINES );
        glVertex3fv( cube[0] );
        glVertex3fv( cube[4] );
        glVertex3fv( cube[1] );
        glVertex3fv( cube[5] );
        glVertex3fv( cube[2] );
        glVertex3fv( cube[6] );
        glVertex3fv( cube[3] );
        glVertex3fv( cube[7] );
        glEnd();
    }
}
```

The only thing I can think of is that you *can’t* just get a matrix to do this. Instead, I’ve got to project not a box but a pyramid shape, where the front face corresponds to the screen resolution and the rear face is scaled up from it by the field of view over the (farPlane - nearPlane) distance.

Anybody have any advice for me?

EDIT:

With further thought, I’ve decided to take the approach of creating not a unit cube, but rather a frustum in eye space (i.e., aligned along x, y and z), and then transforming that to world space using the inverse of the camera’s modelview matrix. This ought to get around the lack of a 1/w perspective transformation.

It seems to me that this ought to allow for easy random population and collision detection, since detecting when a particle leaves the volume would be just a matter of similar triangles.

I’m going to write up a test to see if this is viable, but still, I’m curious if there’s a simpler way…