first off i’m curious about the static pipeline fog. i’m assuming it is not computed when the depth buffer is written to, and that it’s based on z depth rather than actual distance. i really don’t understand the mechanism. i’m guessing that maybe when the buffer is swapped the depth buffer is used to compute fog values, with depth values equal to the clear value passed through for fogging.
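for reference, my current understanding of the conventional fixed-function fog model (as documented in the OpenGL spec: a fog factor computed per fragment from a fog coordinate `c`, which is the eye-space distance or, commonly, just |z| in eye space), sketched in python:

```python
import math

def fog_factor(c, mode="linear", start=0.0, end=1.0, density=1.0):
    """Fixed-function-style fog factor f, where c is the fog coordinate
    (eye-space distance, often approximated by |z_eye|).
    f = 1 means unfogged, f = 0 means fully fogged."""
    if mode == "linear":
        f = (end - c) / (end - start)
    elif mode == "exp":
        f = math.exp(-density * c)
    elif mode == "exp2":
        f = math.exp(-(density * c) ** 2)
    else:
        raise ValueError(mode)
    return min(1.0, max(0.0, f))

def apply_fog(frag_rgb, fog_rgb, f):
    # blend the fragment colour toward the fog colour by (1 - f)
    return tuple(f * fc + (1.0 - f) * gc for fc, gc in zip(frag_rgb, fog_rgb))
```

if that model is right, fog happens during rasterization as each fragment is shaded, not in a pass over the depth buffer at swap time — but i’d welcome corrections.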
in this case i have an application which uses multiple z slicing planes to overcome the lack of precision in the z buffer. i’m afraid that synchronizing fogging across slices might not be possible. i think there is linear fog and a fog offset, but beyond that i’m not sure what to expect.
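one idea i’ve been toying with (a sketch only — it assumes each slice re-bases its fog coordinate as distance minus the slice’s near plane, which may not match how the hardware actually works): shift the linear fog start/end per slice so the per-slice fog factor agrees with the global one.

```python
def linear_fog(c, start, end):
    # clamped linear fog factor: 1 = unfogged, 0 = fully fogged
    return max(0.0, min(1.0, (end - c) / (end - start)))

def slice_fog_params(global_start, global_end, slice_near):
    # Shift the linear fog start/end by the slice's near distance so that a
    # fog coordinate re-based per slice (c_local = distance - slice_near)
    # reproduces the same fog factor the global distance would give.
    # (hypothetical setup; slice_near is the eye distance where the slice begins)
    return global_start - slice_near, global_end - slice_near
```

for example, a fragment at eye distance 150 inside a slice starting at 100 would use c_local = 50 with the shifted parameters and land on the same fog factor as the global computation.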
so assuming the worst case, i’m considering doing per-pixel distance-based fog. i know there is a fog parameter that can be passed either as a vertex attribute or between shaders. i don’t believe there is any such thing as a fog buffer. what sort of factors should i take into consideration? i figure in the end it just amounts to modulating the outgoing pixel colour, so i figure things like fogging an alpha-blended pixel would produce double fogging.
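to make the double-fogging worry concrete, here’s a toy single-channel sketch (all the numbers are made up): fogging each layer once at its own distance and then alpha blending, versus fogging the already-fogged composite a second time.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def fog(col, fog_col, f):
    # f = 1: unfogged, f = 0: fully fog colour
    return lerp(fog_col, col, f)

def over(src, dst, alpha):
    # simple alpha blend of src over dst
    return lerp(dst, src, alpha)

fog_col = 0.8
background = 0.2   # opaque background, already fogged when it was drawn
glass = 1.0        # translucent surface colour
alpha = 0.5
f_surface = 0.6    # fog factor at the translucent surface's distance

# fog each layer once, at its own distance, then blend:
correct = over(fog(glass, fog_col, f_surface), background, alpha)

# fogging the blended result again fogs the background twice,
# pulling the pixel further toward the fog colour than it should be:
double_fogged = fog(correct, fog_col, f_surface)
```

so i suspect the rule is: fog the translucent fragment’s colour before blending, and never fog the framebuffer again afterwards.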
so as it’s probably clear by now, i’m not really sure what i’m getting into in considering this.
but furthermore, for a recent project i would like to consider a special kind of fog, described below, which i would like to do entirely in the fragment shader if possible.
i’m basically working right now on a fictional cylindrical world. it is divided into 12 regions like a clock, with every other region alternating between night and day zones. so imagine a sort of colour disk oscillating between light yellow and dark blue, with green blending in between.
now imagine placing two points on the disk: point (a) the camera and point (b) the world-space pixel. the idea is to calculate the final blended colour along the resulting line segment, as viewed through the screen-space pixel, as accurately and efficiently as possible.
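for what it’s worth, here’s a numeric python sketch of the kind of thing i mean: march the segment from (a) to (b) in the disk’s cross-section plane, look up the zone colour from the angle around the axis, and accumulate front-to-back with exponential fog transmittance. the colours, the cosine blend, the density and the sample count are all placeholder assumptions, and a real shader version would use far fewer samples or an analytic form.

```python
import math

DAY = (1.0, 1.0, 0.6)    # light yellow
NIGHT = (0.1, 0.1, 0.4)  # dark blue

def zone_colour(x, y):
    # 12 clock regions => 6 full day/night cycles around the axis;
    # the cosine makes adjacent zones blend smoothly (greenish in between)
    theta = math.atan2(y, x)
    t = 0.5 * (1.0 + math.cos(6.0 * theta))
    return tuple(n + (d - n) * t for d, n in zip(DAY, NIGHT))

def segment_fog_colour(a, b, density=0.05, samples=64):
    """Accumulate the fog colour along the segment a -> b, weighting each
    sample by how much light it contributes to the eye (front-to-back
    exponential transmittance). Returns (fog_rgb, remaining transmittance
    to multiply into the surface colour at b)."""
    length = math.dist(a, b)
    step = length / samples
    trans = 1.0                 # how much light still gets through
    accum = [0.0, 0.0, 0.0]
    for i in range(samples):
        s = (i + 0.5) / samples
        x = a[0] + (b[0] - a[0]) * s
        y = a[1] + (b[1] - a[1]) * s
        absorb = 1.0 - math.exp(-density * step)
        col = zone_colour(x, y)
        for k in range(3):
            accum[k] += trans * absorb * col[k]
        trans *= 1.0 - absorb
    return tuple(accum), trans
```

the final pixel would then be the surface colour at (b) times the returned transmittance, plus the accumulated fog colour — but i’d love to hear smarter (ideally closed-form) approaches.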
i have some general ideas, but i was hoping i might be able to glean some creative suggestions and hopefully expand my horizons.