Unexpected fog behavior

I’m trying to implement fog and it appears as though FOG_START and FOG_END values are being ignored by Fogf. It also appears as though the resulting fog begins at zero of the modelview matrix’s Z axis rather than the “camera” as set by Glu.LookAt. As objects move farther away from zero on that axis they are more heavily colored by the fog. Different camera coordinates have no effect.

Here’s what is hopefully the relevant code:

' Enable fog and set mode and parameters.

' Set projection mode.
Glu.Perspective(45, sWidth / sHeight, Camera.Near, Camera.Far)
Gl.Scalef(1, -1, 1)

' Update OpenGL matrix.
UpVec1 = Sin(Rad(- Camera.Orientation))
UpVec2 = Cos(Rad(- Camera.Orientation))
UpVec3 = 1
Glu.LookAt(Camera.WorldX, Camera.WorldY, Camera.WorldZ, Camera.WorldX, Camera.WorldY, -32768, UpVec1, UpVec2, UpVec3)

Fog(Working, Working, Working, Ignored, Camera.Near, Camera.Near + 10)

Does this problem sound familiar or is there anything in my code that hints at the problem?

You need to switch back to GL_MODELVIEW matrix mode before setting the camera with Glu.LookAt(…):
Help stamp out GL_PROJECTION abuse.

I did that and the difference seems to make no sense. This is the order of operations of what I think is the relevant code:

  ' Set projection mode.
  Glu.Perspective(45, sWidth / sHeight, Camera.Near, Camera.Far)
  Gl.Scalef(1, -1, 1 * ElevationScale)

  Glu.LookAt(Camera.WorldX, Camera.WorldY, Camera.WorldZ, Camera.WorldX, Camera.WorldY, -32768, UpVec1, UpVec2, UpVec3)

  ' Set fog parameters to atmospheric values.
  Fog(SolarColor[0], SolarColor[1], SolarColor[2], IgnoredAlpha, IgnoredStart, IgnoredEnd)
  ' Draw tile grid landscape.
  ' Draw objects in render queue.
  ' Draw water.

  ' Set projection mode to 2D.

  ' Draw HUD/GUI.

The “Fog” procedure looks like this:

Public Sub Fog(R As Single, G As Single, B As Single, A As Single, FogStart As Single, FogEnd As Single)

  ' Enable fog and set the specified parameters.

  ' General declarations.
  Dim FogColor As New Single[4]

  ' Set fog color.
  FogColor[0] = R
  FogColor[1] = G
  FogColor[2] = B
  FogColor[3] = A

  ' Enable fog and set its properties.
  Gl.Enable(Gl.FOG)
  Gl.Fogfv(Gl.FOG_COLOR, FogColor)
  Gl.Fogf(Gl.FOG_START, FogStart)
  Gl.Fogf(Gl.FOG_END, FogEnd)

End

The fog color is set correctly, but everything is 100% fog color. With no concrete feedback from trial and error, I’m guessing it has something to do with (a) enabling and disabling depth testing while rendering various things per frame, or (b) scaling the perspective projection matrix to reverse the Y axis and compress the Z axis. Is there some order of operations with respect to fog and enabling/disabling depth testing? Could the perspective projection matrix’s scale transformation somehow affect what would seem like sane fog parameters? This is driving me crazy!

To follow up, it appears that linear fog is the problem. EXP2 with a precise density parameter seems to work well. Why in the world would linear fog, which has the most intuitive parameters (start, end), behave so badly? I scaled the start and end values a million different ways trying to get results.
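For reference, here is a quick sketch of how the fixed-function pipeline computes the two fog factors being compared, following the equations in the GL 2.1 spec (written in Python rather than Gambas purely for illustration; the function names are my own):

```python
import math

def linear_fog(z, start, end):
    """GL_LINEAR fog factor: f = (end - z) / (end - start), clamped to [0, 1].
    z is the eye-space fog coordinate; f = 1 means no fog, f = 0 means fully fogged."""
    f = (end - z) / (end - start)
    return max(0.0, min(1.0, f))

def exp2_fog(z, density):
    """GL_EXP2 fog factor: f = exp(-(density * z)^2), clamped to [0, 1]."""
    return max(0.0, min(1.0, math.exp(-(density * z) ** 2)))

# A fragment halfway between start and end is half fogged:
print(linear_fog(15.0, 10.0, 20.0))  # 0.5
```

Note that both formulas depend entirely on the eye-space fog coordinate `z`; if that coordinate is being distorted (e.g. by a scale baked into the wrong matrix), linear fog with its hard start/end window shows the damage much more obviously than the gentle EXP2 falloff does.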

Suggest you move that glScale( 1, -1, scale ) out of the PROJECTION matrix and put it in MODELVIEW. That isn’t part of your PROJECTION. In fact, I’d nuke this altogether until you solve your problem – possibly related.

Keep in mind that fog is computed from a fog coordinate defined in EYE-SPACE (which is determined by your MODELVIEW matrix in combination with the OBJECT-SPACE vertex positions you provide). Also, if you want the most intuitive results, switch the fog range mode computation to RADIAL.
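To illustrate the difference (a Python sketch with illustrative names; typical fixed-function implementations use the planar version):

```python
import math

def planar_fog_coord(eye_pos):
    """Planar fog coordinate: the eye-space depth only (|z|).
    Fog changes as the camera rotates, since z depends on view direction."""
    return abs(eye_pos[2])

def radial_fog_coord(eye_pos):
    """Radial fog coordinate: true Euclidean distance from the eye.
    Stable under camera rotation."""
    return math.sqrt(sum(c * c for c in eye_pos))

# Two points both 10 units from the eye, at different angles:
print(planar_fog_coord((0.0, 0.0, -10.0)))  # 10.0
print(planar_fog_coord((8.0, 0.0, -6.0)))   # 6.0  (same distance, less fog)
print(radial_fog_coord((8.0, 0.0, -6.0)))   # 10.0
```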

(Update:) Looking back at old GL 2.1, it doesn’t look like the fog range mode ever made it into core. It’s an NVidia extension (GL_NV_fog_distance). In case you have an NVidia card, you can enable radial fog distance through that extension.


I’m trying to make this driver independent, so NVIDIA can suck it with their extensions if they’re not supported by other OpenGL-compliant cards. Good to know, though, thanks. :)

I applied the scale transformation to the modelview matrix rather than the projection matrix and interestingly two things happened. First the fog still didn’t behave, and second it whacked out my directional light simulating the sun. I bounce the light’s x axis between -1 and 1, manipulating its colors to simulate a day/night cycle, and after scaling the modelview instead of the projection matrix it began making midnight as bright as noon (and greatly intensifying the water color), although it appeared to behave everywhere else. I suppose the root of all evil here is how I’m scaling the modelview matrix (as you suggested). It’s probably messing up the fog and the damn directional light. Sigh…

For now I’m going to stop spending days fixing a broken doorbell as the house rots around me, continue to abuse the projection matrix, and move on. I will sort it out and when I do I’ll post the solution here. Thanks everyone for helping. You rock.

Is using shaders completely out of the picture for you, do you absolutely need fixed functions? A linear fog term is very easy to compute per fragment, you know. No fiddling around with the projection matrix necessary.
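To back up that claim, the entire per-fragment job is just a clamped lerp toward the fog color. A sketch of the math a one-line fragment shader would perform (Python stand-in, names are illustrative):

```python
def apply_fog(frag_color, fog_color, z, start, end):
    """Blend the fragment color toward the fog color by the linear fog
    factor f = (end - z) / (end - start), clamped to [0, 1]."""
    f = max(0.0, min(1.0, (end - z) / (end - start)))
    return tuple(f * c + (1.0 - f) * g for c, g in zip(frag_color, fog_color))

# A red fragment at the far end of the fog range comes out pure fog color:
print(apply_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), 20.0, 10.0, 20.0))
# → (0.5, 0.5, 0.5)
```

Because the shader computes `z` itself (e.g. from the interpolated eye-space position), it is entirely independent of whatever is in the projection matrix.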

Nothing’s completely out of the picture, although being a jack of all trades (and master of none) and being solely responsible for the entire project, I am limited in how much time I can spend on any one thing (including learning, unfortunately). For example, I knew absolutely nothing of OpenGL at the start of the project, and I’m only now getting comfortable with the logic of matrices. Case in point: I don’t even know what a fragment is. I pretty much have to pick my battles carefully.

Once a few more basic gameplay elements have been implemented and I hit alpha I’ll explore shaders to see what the fuss is all about. It’s my understanding that it’s a scripting language interpreted by the GPU to perform hardware accelerated custom operations for special effects like water, but that’s all I know really. Good tip; I didn’t know a shader could do fog.

“Shader” is a bit of a misnomer. In fact it is a small C-like program with strong typing, compiled by the video driver to machine code that runs on the GPU.
A program can be attached to the vertex stage (taking all input for a vertex, such as position, color, texture coordinate, fog parameters, alpha, any value really) to output a vertex with different parameters.
A program can be attached to the fragment stage (a fragment is almost like a pixel) to take whatever input parameters were interpolated from the vertices, plus other constants, and generate a final color and depth (for example, implementing a complex material using several textures with bump mapping and fog).

By design each vertex and each fragment can be computed independently of its siblings, which means the work can be processed massively in parallel by the GPU (from dozens to thousands at a time).

It seems intimidating at first but allows in fact much better control and easier implementation of anything non-trivial. For a nice view of what is modern OpenGL, read this :

Awesome. I’m assuming the output of a shader call can “spread” across the many pixels of a quad or other geometry by referencing vertices as you mentioned. Sounds like doing per-pixel software operations but executing in hardware with hardware-interpolated results. If true, that alone probably explains why the 360 and PS3 have okay graphics with crap CPU and RAM. In the queue.

Indeed, GPUs take advantage of extreme parallelism thanks to having almost no coupling between tasks, which makes them much more efficient (in raw compute power per transistor) than CPUs.

But per-pixel operation results (fragment shaders) are really per pixel; only the inputs are interpolated between vertices.
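That interpolation is just a weighted (barycentric) average of the per-vertex values. A minimal sketch, ignoring perspective correction, with illustrative names:

```python
def interpolate(attrs, bary):
    """Interpolate a per-vertex attribute (e.g. color channel or eye-space
    depth) across a triangle using barycentric weights, the way the
    rasterizer feeds inputs to each fragment."""
    return sum(w * a for w, a in zip(bary, attrs))

# Eye-space depths at the three triangle corners:
depths = (5.0, 10.0, 20.0)
# A fragment at the centroid receives the average of the three:
print(interpolate(depths, (1 / 3, 1 / 3, 1 / 3)))
```

The fragment shader then runs once per covered pixel on these interpolated inputs; its output is not spread or shared between pixels.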