Polygon offset uses the following equation to compute the offset applied to the z-buffer value:
o = m * factor + r * units
where factor and units are the parameters passed to glPolygonOffset, m is the maximum depth slope of the polygon, and r is the minimum resolvable difference.
r is an implementation constant, but I’ve been browsing the web to find its value with no success so far. Does anybody know how to find it?
r is the integer 1.
Thus if you have a 16-bit depth buffer, r represents 1/65536th of the depth range (because 2^16 = 65536).
Cool, that was way too easy. I was torturing my mind with a value based on the near and far clip planes, but I was having trouble because the precision isn’t constant across the z-buffer.
Thanks for your much easier solution.
But be careful if your projection is perspective (not orthographic).
Imagine your near plane is at 1 and your far plane at 65536.
With an orthographic projection matrix, r corresponds to 1 unit across the entire depth range.
But with a perspective projection matrix, the perspective division makes r non-uniform through the depth range. That is, r represents less than 1 unit when an object is close to the viewpoint, and more than 1 unit when it is far from the viewpoint.
Right - r is one unit in window space z, regardless of the projection matrix.
Whether that interval maps to a constant interval back in eye space depends on the projection matrix.
This also brings up an interesting point about floating-point depth buffers (whether they are Z or W buffers): it is impossible to define the r term as a single epsilon, since the epsilon changes with range. This isn’t really a big deal, since OpenGL uses fixed-point z-buffers.