Volumetric fog without tessellation

I’ve come to the conclusion that I’m going to need volumetric fog in conjunction with particles to simulate smoke-filled rooms (it’s a firefighting sim).

Particles alone eat up way too much fill-rate, and fog alone doesn’t look quite right. Mixing regular glFog and particles actually looks decent, but I’d like it to be volumetric.
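For reference, the baseline I’m comparing against is just the standard fixed-function fog, something along these lines (the density and color are placeholder values):

```c
/* Standard fixed-function fog, roughly what I'm mixing with the particles now.
   Color and density are just example values to tune per room. */
#include <GL/gl.h>

void setup_fog(void)
{
    const GLfloat fogColor[4] = { 0.6f, 0.6f, 0.6f, 1.0f }; /* grey smoke tint */

    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_EXP2);      /* exponential falloff */
    glFogfv(GL_FOG_COLOR, fogColor);
    glFogf(GL_FOG_DENSITY, 0.05f);
    glHint(GL_FOG_HINT, GL_NICEST);    /* per-fragment fog if the driver supports it */
}
```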

I read about how Quake3 implemented it through extra tessellation, and that doesn’t sound too appealing, since my volumes aren’t going to be static.

I looked at Humus’s VolumetricLightingII demo, and while that seems promising, when you’re inside a light it actually brightens geometry that isn’t in the light’s volume, so that doesn’t seem to be the way to go.

Looking at the NVSDK: http://download.developer.nvidia.com/developer/SDK/Individual_Samples/samples.html
The Fog Polygon Volumes whitepaper looks promising. Has anyone actually implemented this in OpenGL? What’s the performance like?

I have an idea using shadow volumes: the geometry that’s inside the fog is masked with the stencil buffer, just like the traditional shadow volume algorithm, and then a series of billboarded planes is drawn to shroud the geometry inside the volume. I’m guessing someone’s already done this.
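Roughly what I’m picturing for the stencil part, as an untested sketch (draw_fog_hull() and draw_billboard_planes() are just placeholders for my own code, and this assumes a closed fog hull and that the scene’s depth has already been laid down):

```c
/* Untested sketch: count fog-hull faces like depth-fail shadow volumes,
   then draw the shrouding billboards only where the stencil marks "inside fog". */
#include <GL/gl.h>

extern void draw_fog_hull(void);          /* placeholder: renders the fog volume hull */
extern void draw_billboard_planes(void);  /* placeholder: renders the shrouding billboards */

void draw_fog_with_stencil(void)
{
    /* Pass 1: build the stencil mask from the fog hull. */
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);                        /* back faces: increment where depth test fails */
    glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);
    draw_fog_hull();

    glCullFace(GL_BACK);                         /* front faces: decrement where depth test fails */
    glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
    draw_fog_hull();

    /* Pass 2: billboards only where stencil != 0, i.e. geometry inside the volume. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_NOTEQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    draw_billboard_planes();

    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDisable(GL_STENCIL_TEST);
}
```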

There’s probably a better alternative to the billboarded planes, using vertex and fragment programs, but do you think treating the fog volumes as shadow volumes is a decent start?

Your experiences with this would be greatly welcomed. :slight_smile:

Projected textures, perhaps?
It’s what I did here:
http://uk.geocities.com/sloppyturds/volfog2.jpg

Well, you can do volumetric fog per-pixel with something like this.

What you do is render the volume to a texture, but you’re not rendering the color of the volume. You render the depth only (you’ll need to write the depth to a color texture, like a 16 or 32-bit float luminance texture). Now, you do this with backface culling on, so you only get the depths of the front faces.

Next, render the volume geometry’s depth into a second texture, but this time with the culling reversed (cull the front faces so only the back faces render). So, effectively, you have two textures: one has the depth at which the fog begins, and the other has the depth at which the fog ends.
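As a rough sketch of those two passes (assuming a closed, convex fog hull; bind_depth_to_color_program() and draw_fog_hull() are placeholders for your own code, and here I just copy the framebuffer into pre-created float textures):

```c
/* Sketch of the two depth passes. A fragment program that writes (eye-space)
   depth out as color is assumed to be bound; frontTex/backTex are pre-created. */
#include <GL/gl.h>

extern void bind_depth_to_color_program(void);  /* placeholder helper */
extern void draw_fog_hull(void);                /* placeholder helper */

void render_fog_depth_textures(GLuint frontTex, GLuint backTex, int w, int h)
{
    glEnable(GL_CULL_FACE);
    bind_depth_to_color_program();

    /* Pass 1: front faces only -> depth where the fog begins. */
    glCullFace(GL_BACK);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_fog_hull();
    glBindTexture(GL_TEXTURE_2D, frontTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);

    /* Pass 2: back faces only -> depth where the fog ends. */
    glCullFace(GL_FRONT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_fog_hull();
    glBindTexture(GL_TEXTURE_2D, backTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
}
```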

Note: to save texture binds, you could store these in a 2-component float texture.

Now, render your regular geometry with both textures bound. In your fragment program, you determine whether the fragment is in front of the fog, inside it, or behind it. This is easily done. If it’s in front, you do nothing. If it’s behind, you apply the full fog thickness (back depth minus front depth) to its color. If it’s inside, you compute the distance from the front of the fog to the fragment (note that these are post-projection depth values, so they aren’t linear; you may want to linearize them) and apply that to your color.
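The per-fragment logic boils down to something like this (written as plain C just to show the math; the three depths are assumed to already be linearized into the same units, and the exponential falloff is just one choice of curve):

```c
/* Per-fragment fog logic, for illustration only.
   frontDepth/backDepth come from the two textures, fragDepth from the fragment;
   'density' is an arbitrary fog density you'd tune. */
#include <math.h>

float fog_amount(float frontDepth, float backDepth, float fragDepth, float density)
{
    if (fragDepth <= frontDepth)      /* in front of the fog: untouched */
        return 0.0f;

    /* Inside the fog: thickness from the fog's front face to the fragment.
       Behind the fog: thickness is the full front-to-back extent. */
    float thickness = fminf(fragDepth, backDepth) - frontDepth;

    return 1.0f - expf(-density * thickness);
}

/* The final color is then a blend of the scene color and the fog color by fog_amount(). */
```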

As long as your fog volume is fully enclosed and convex, you’ll be fine.

It can be modified slightly to work when the viewpoint is inside the fog volume (for example, by clearing the front-depth texture to zero, so wherever the front faces get clipped by the near plane the fog effectively starts at the near plane).

Yeah, that’s the method the NVidia paper describes. Two renders into buffers that will probably be near the framebuffer’s size, plus the fragment program, sounds fairly expensive, but fortunately I can disable the fog when rendering in the thermal imaging camera mode.