Is it possible to implement slope-scale depth bias in the OpenGL 3.2 core profile, and will it help improve shadow quality?
The thing is, I tried to implement cascaded shadow maps with a standard depth map (GL_DEPTH_COMPONENT32F) as a fallback method for when the target machine is not fast enough for VSM (rendering 3 VSM splits plus a ping-pong blur completely kills performance).
Since my meshes have many unclosed surfaces (leaves, skirts, hair, etc.), it is not possible to use the render-back-faces-only trick to avoid z-fighting.
Also, applying too much uniform bias in the depth comparison makes the shadow detach from the ground, which I would like to avoid.
Quote from “Common Techniques to Improve Shadow Depth Maps” on the MSDN website:
Slope-Scale Depth Bias
As previously mentioned, self-shadowing can lead to shadow acne. Adding too much bias can result in Peter Panning. Additionally, polygons with steep slopes (relative to the light) suffer more from projective aliasing than polygons with shallow slopes (relative to the light). Because of this, each depth map value may need a different offset depending on the polygon’s slope relative to the light.
Direct3D 10 hardware has the ability to bias a polygon based on its slope with respect to the view direction. This has the effect of applying a large bias to a polygon that is viewed edge-on to the light direction, but not applying any bias to a polygon facing the light directly. Figure 10 illustrates how two neighboring pixels can alternate between shadowed and unshadowed when testing against the same unbiased slope.
If you wanted this, surely you could do it manually in your pixel shader when applying the shadow map? I’ve been considering making my own spin on the technique using the light distance rather than a depth buffer; it should be more accurate and linear.
I posted a response to this thread yesterday; is it showing up on your computer? This feature is not Direct3D 10 only. It has been in OpenGL in every version from 1.0 to 4.1, it just has a different name: Polygon Offset vs. Slope-Scaled Bias.
Be aware that polygon offset sucks because of a number of factors:
- The spec allows it to be implementation-dependent (clicky), so the same values may give different results on different hardware.
- You may encounter floating point precision problems when moving between 16-bit and 24-bit depth buffers.
- The depth buffer is non-linear, so the same values will give different results at different depths.

If you can find values that work well for you, then great, but it’s not a general solution and you shouldn’t expect it to be one.
Description, and a recommendation to start with 1.1, 4.0 and tweak to taste. There’s also a projection matrix trick that offsets objects by roughly one Z-buffer depth unit forward, which gets around polygon offset’s Achilles’ heel (if it even bites you). Pointers:
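The idea behind that projection matrix trick can be sketched as follows: scaling the P[2][2] entry of a standard perspective matrix by (1 + eps) shifts every fragment's NDC depth by a small constant, independent of eye-space depth, which avoids glPolygonOffset's implementation-dependent behaviour. This is a simplified variant for illustration; the helper names and eps value are assumptions, and the sign of eps picks the offset direction:

```c
#include <math.h>

/* NDC depth of an eye-space point for a perspective matrix with
 * P[2][2] = a and P[2][3] = b (clip.z / clip.w, z_eye < 0). */
double ndc_depth(double a, double b, double z_eye)
{
    return (a * z_eye + b) / -z_eye;
}

/* Change in NDC depth caused by scaling P[2][2] by (1 + eps).
 * Algebraically this is -a * eps, a constant: the shift does not
 * depend on z_eye, unlike glPolygonOffset's depth-slope term. */
double depth_shift(double n, double f, double z_eye, double eps)
{
    double a = -(f + n) / (f - n);      /* standard perspective P[2][2] */
    double b = -2.0 * f * n / (f - n);  /* standard perspective P[2][3] */
    return ndc_depth(a * (1.0 + eps), b, z_eye) - ndc_depth(a, b, z_eye);
}
```

Because the shift is constant in NDC, choosing eps so the shift is about one depth-buffer unit gives a uniform offset at every depth.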