I am currently working on rendering volumetric fog, and I have it working using multiple passes.
First, everything in the scene is rendered to a texture.
Next, I render the fog volume's (a cube's) back faces to store data in a new texture (u, v, w, screen depth).
Then I render the front faces, passing the previous two textures into that shader.
When you go inside the fog, the final pass does not render the whole cube, because part of the cube is behind the camera and gets clipped by the near plane.
At the moment, I have a fog effect that simply uses screen depth, because when the front-face data is missing I can assume it is 0.0, since 0.0 is the start of the camera frustum.
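That fallback is equivalent to a ray/box intersection where a negative entry distance is clamped to zero (i.e. the ray starts at the camera). A minimal sketch of that idea, assuming an axis-aligned fog cube; the function and parameter names are illustrative, not from any particular engine:

```cpp
#include <algorithm>
#include <array>

// Slab-method ray/AABB intersection. Returns {tEnter, tExit} along the ray.
// When the camera (ray origin) is inside the box, tEnter comes out negative
// and is clamped to 0.0 -- the same assumption as "missing data means 0.0".
std::array<float, 2> rayBoxSpan(const std::array<float, 3>& origin,
                                const std::array<float, 3>& dir,
                                const std::array<float, 3>& boxMin,
                                const std::array<float, 3>& boxMax) {
    float tEnter = -1e30f, tExit = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / dir[i];  // IEEE inf handles axis-parallel rays
        float t0 = (boxMin[i] - origin[i]) * inv;
        float t1 = (boxMax[i] - origin[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tEnter = std::max(tEnter, t0);
        tExit  = std::min(tExit, t1);
    }
    tEnter = std::max(tEnter, 0.0f);  // camera inside: start at the camera
    return {tEnter, tExit};
}
```

Doing this math per fragment (from a front-face pass with depth clamping enabled, or from a fullscreen quad) sidesteps the missing-front-face problem entirely, since the entry point no longer depends on rasterized geometry.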
Is there a way I can tell OpenGL, "If the camera intersects this object, assume a new face is there to fill the intersection, and accurately interpolate the vertex data across it"?
That way, I can access the UVW coordinates as well as the depth values and get some pretty sweet-looking fog with a 3D Perlin-noise texture.
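Once entry and exit points in UVW space are known, the fog itself is typically a ray march accumulating Beer-Lambert transmittance. A rough CPU-side sketch of that accumulation; `sampleDensity` stands in for a 3D Perlin-noise texture fetch, and `steps`/`absorption` are made-up tuning knobs:

```cpp
#include <cmath>

// Placeholder for sampling the 3D Perlin-noise texture; returns density in [0, 1].
float sampleDensity(float u, float v, float w) {
    return 0.5f + 0.5f * std::sin(6.2831853f * (u + v + w));  // illustrative only
}

// Beer-Lambert transmittance accumulated by marching from entry to exit in
// UVW space: 1.0 = fully clear, approaching 0.0 = fully fogged.
float marchTransmittance(const float enter[3], const float exit[3],
                         int steps, float absorption) {
    float stepLen = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float d = (exit[i] - enter[i]) / steps;
        stepLen += d * d;
    }
    stepLen = std::sqrt(stepLen);  // length of one march step

    float opticalDepth = 0.0f;
    for (int s = 0; s < steps; ++s) {
        float t = (s + 0.5f) / steps;  // midpoint sampling along the segment
        float u = enter[0] + (exit[0] - enter[0]) * t;
        float v = enter[1] + (exit[1] - enter[1]) * t;
        float w = enter[2] + (exit[2] - enter[2]) * t;
        opticalDepth += sampleDensity(u, v, w) * stepLen;
    }
    return std::exp(-absorption * opticalDepth);
}
```

In a fragment shader this would be the same loop with `texture(noiseTex, uvw).r` as the density, so the longer the ray travels through the volume, the thicker the fog appears.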