Multiple Z Buffer Formulas

Originally posted by Shinta:
Right, I (well, not quite by myself) have modeled whole cities like Berlin, Hamburg and Cologne at 5-10 cm precision, really. I work for a company doing geoinformatics (3D GIS, aerial-photo scanning, orthophoto generation, …, whole-city real-time visualisation).
Even if you acquire data every 5 centimeters, there will be many errors, even with high-precision cameras/radars/etc. Anyway, you’re talking about losing precision for objects rendered 1 km away or so. At the size they will appear on screen, there is very little chance the user will notice it. And if he really does notice it, that means he’s looking at far objects, not near objects, so you can push the near clipping plane out to win on the near/far ratio.

And if you really need ultra-high precision at both the near and far planes, then either use the depth-replace technique (slow indeed) or buy the kind of SGI workstation that has a 1024-bit framebuffer and lets you allocate bit depths to any buffer you like (for instance you could get three 256-bit RGBA buffers (64 bits per channel, with triple buffering) and a 256-bit depth buffer… enough precision?).

Even if you acquire data every 5 centimeters, there will be many errors, even with high-precision cameras/radars/etc.
For the absolute errors you’re right, but not for the relative errors (e.g. if I measure the positions of some windows on a wall, the error in the distance between the windows is very small).

Anyway, you’re talking about losing precision for objects rendered 1 km away or so. At the size they will appear on screen, there is very little chance the user will notice it.
Oh, the size is not the problem. It’s the flickering of faces that are very close together (and don’t have the same color or texture). Those faces get mapped to the same z-buffer value if the z-buffer precision is not high enough. Take my calcs above: if I set near=3 and far=3000, then faces near the far clip plane need at least 17 cm of z-distance to avoid being mapped to the same z-buffer value. If they don’t have that, the face drawn first stays visible, even if it is behind another face. (And please don’t advise me about back-face culling, I’m doing that already :wink: )
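
For reference, that figure falls out of the usual 1/z depth mapping. A minimal standalone check of the arithmetic (my own sketch, assuming a 24-bit depth buffer) looks like this:

```c
#include <math.h>
#include <stdio.h>

/* Smallest eye-space z step that still changes the stored depth value, for a
 * standard perspective projection and an integer depth buffer. The window
 * depth is d(z) = (1/near - 1/z) / (1/near - 1/far), quantized into 2^bits
 * steps, so one step corresponds to dz = z^2 * (1/near - 1/far) / 2^bits. */
double depthStep(double z, double nearP, double farP, int bits)
{
    return z * z * (1.0 / nearP - 1.0 / farP) / pow(2.0, bits);
}

int main(void)
{
    /* the case above: near = 3, far = 3000, 24-bit depth buffer */
    printf("z step at the far plane: %.3f m\n",
           depthStep(3000.0, 3.0, 3000.0, 24));
    /* prints roughly 0.18 m, i.e. near the far plane two faces closer
     * together than ~18 cm in z can land in the same depth value */
    return 0;
}
```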

Shinta

Try to push the near plane as far out as you can. I mean, if you have a bird’s-eye view of a big city, you can push the near plane to 100.0. Actually… try changing the near and far planes every frame depending on camera position and scene complexity.
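
Something like this, roughly (just a sketch; computeSceneDepthRange() stands in for however you estimate the distance to the nearest and farthest visible geometry, e.g. from bounding volumes):

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* your own routine: eye-space distances to the nearest/farthest visible geometry */
void computeSceneDepthRange(double *nearest, double *farthest);

void setupProjection(double fovY, double aspect, double userMinNear)
{
    double nearest, farthest;
    computeSceneDepthRange(&nearest, &farthest);

    /* keep a small safety margin, but never go below the user-set minimum */
    double nearPlane = 0.9 * nearest;
    if (nearPlane < userMinNear)
        nearPlane = userMinNear;
    double farPlane = 1.1 * farthest;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovY, aspect, nearPlane, farPlane);
    glMatrixMode(GL_MODELVIEW);
}
```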

yooyo

Originally posted by yooyo:
Try to push the near plane as far out as you can. I mean, if you have a bird’s-eye view of a big city, you can push the near plane to 100.0. Actually… try changing the near and far planes every frame depending on camera position and scene complexity.
Thanks yooyo.

But I already do this, too. ;_; I let the near value slide between a user-set minimum and 50 m, but there are still situations where this doesn’t work. The software I wrote provides a so-called “walk mode”, where the viewer moves over the DTM (digital terrain model) at ~1.8 m above the ground. In this mode the calculated near value is always very close to the viewer (because of the DTM), so the algorithm doesn’t help much (and to speed things up, it is disabled in walk mode anyway).

Shinta

In “walk mode”, the user is very unlikely to see objects 3 km away from him. So pull in the far clipping plane in that case.

Just a general comment with regard to the original post: it would be nice to have a z-buffer formula where we are not excessively punished for setting the near plane to less than 1.0.

Why shouldn’t we be able to set the near plane to 0.0? Almost all games I’ve played have had problems where the near clip plane sometimes intersects world geometry or other characters. The 3D code has to jump through hoops and do tricks so that the camera’s near plane doesn’t accidentally cut off geometry, but in the end there are still a few situations where it happens anyway.

I want to be able to just set the clipping plane to 0.0 and not worry about the near clip plane.

Many optimizations rely on the depth being computed as 1/Z, so changing that assumption would simply kill the framerate.

To be able to set the near clip to 0, they’d just have to change it to something like 1/(1+z) …

Seems so easy on paper :slight_smile:
But in fact it breaks lots of optimizations. 1/z and 1/(1+z) are very different when it comes to interpolating Z while rasterizing a polygon, for instance.
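
To illustrate the difference with a small standalone check (nothing hardware-specific, just arithmetic): 1/z is exactly linear in screen space, which is what the rasterizer relies on, whereas 1/(1+z) is not:

```c
#include <stdio.h>

/* Take a segment in eye space, project its endpoints and its 3D midpoint,
 * and compare linear screen-space interpolation of 1/z and of 1/(1+z)
 * against their true values at the midpoint. */
int main(void)
{
    /* endpoints (x, z) in eye space, z pointing away from the camera */
    double ax = 0.0, az = 2.0;
    double bx = 4.0, bz = 10.0;
    double mx = 0.5 * (ax + bx), mz = 0.5 * (az + bz);   /* 3D midpoint */

    /* screen-space x is x/z under a simple perspective projection */
    double sa = ax / az, sb = bx / bz, sm = mx / mz;
    double t  = (sm - sa) / (sb - sa);   /* where the midpoint lands on screen */

    /* 1/z: linear interpolation in screen space is exact */
    printf("1/z     lerped = %f   true = %f\n",
           (1.0 - t) / az + t / bz, 1.0 / mz);

    /* 1/(1+z): linear interpolation in screen space gives the wrong value */
    printf("1/(1+z) lerped = %f   true = %f\n",
           (1.0 - t) / (1.0 + az) + t / (1.0 + bz), 1.0 / (1.0 + mz));
    return 0;
}
```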

Just reading about the “clip at zero” problem, I had an idea (someone tell me if this is insane - I have not tried it):

Multiplying by the projection matrix just gets the vertices into the range -1, 1 in x and y and applies perspective along z. So if you want the clip plane at z=0, can’t you just translate a little along z after the perspective has been applied, to prevent clipping of near geometry? (i.e. probably by nearDist/(farDist - nearDist) along z)
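
In OpenGL terms the idea would look roughly like this (an untested sketch, only to make the above concrete):

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* OpenGL applies the most recently specified matrix to vertices first, so
 * loading the translation *before* gluPerspective makes it act after the
 * perspective matrix, i.e. in clip space (before the divide by w). */
void setupShiftedProjection(double fovY, double aspect,
                            double nearDist, double farDist)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glTranslated(0.0, 0.0, nearDist / (farDist - nearDist)); /* post-projection shift */
    gluPerspective(fovY, aspect, nearDist, farDist);
    glMatrixMode(GL_MODELVIEW);
}
```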

Like I said, I have not really thought this through, so feel free to shoot me down.

This way you’re just emulating a camera that is a bit further back, but in the end you still get clipped. Moreover, you will mess up lighting effects (specular) when the light model’s local-viewer flag is set to true, not to mention fog, etc.

Nice try though :wink:

sorry to butt in, but a while back i was doing some work involving very large ranges, which presented difficult precision and depth buffer issues. i plan to pick that work back up pretty soon, so i was hoping that with all these knowledgeable people in one place, i might be able to get a few ideas.

so basically i’m rendering the entire planet earth with the latest government elevation data set. i won’t get too far into the details, but i will start with the precision issues first, then describe how they relate to the depth buffer.

so basically, earth’s radius is about 6000 km if i recall correctly. however, i was only safely able to push the radius to half that before precision at the human scale started causing seams and things like camera jiggling.

basically the system is a dynamic-lod roam-type system, with two tiers of tessellation. let’s say that chunks of the planet, at whatever lod, are normalized so as to be in an ideal range to avoid precision issues.

before rendering, the chunks are normally transformed into their proper position by the modelview matrix.

remember that most of the earth is just solid whatever, so all operations would normally take place around 6000km from the origin. this alone can cause precision issues when interaction takes place at the human scale.

for a while this was causing me serious issues, however there seemed to be a slicker approach: rather than transforming the chunks into world space, i inverse-transform the camera into each chunk’s local space (remember the chunks are normalized for local computations).
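
roughly, the idea looks like this (a sketch only; Chunk, worldToChunk() and drawChunk() are placeholders for whatever you already have):

```c
#include <GL/gl.h>
#include <GL/glu.h>

typedef struct Chunk Chunk;   /* placeholder chunk type */
/* placeholder: inverse chunk transform, kept in double precision on the cpu */
void worldToChunk(const Chunk *c, const double world[3], double local[3]);
void drawChunk(const Chunk *c);   /* placeholder draw call */

/* express the eye and look-at point in the chunk's local frame, then build
 * the modelview from those small local numbers instead of 6000 km-sized
 * world coordinates */
void drawChunkCameraRelative(const Chunk *chunk,
                             const double worldEye[3],
                             const double worldTarget[3],
                             const double up[3])
{
    double localEye[3], localTarget[3];
    worldToChunk(chunk, worldEye, localEye);
    worldToChunk(chunk, worldTarget, localTarget);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(localEye[0],    localEye[1],    localEye[2],
              localTarget[0], localTarget[1], localTarget[2],
              up[0],          up[1],          up[2]);

    drawChunk(chunk);   /* vertices already stored in the chunk's local frame */
}
```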

this worked great, everything lined up with no artifacts… however, one problem: the depth buffer is misaligned for chunks at different lod. i spent a little while looking for ways to remap the depth buffer, using things like glDepthRange and looking for extensions. glDepthRange, if that is the proper name of the function, might work if i knew more about how the depth values are computed, but i never could track that down before i lost interest.

the main reason i lost interest was that i found out that hardware matrix multiplications are actually very lossy, and if i did the multiplications in software i could get by with far fewer precision issues.

this only worked so far, but i was generally satisfied: i could explore the elevation data set (which is roughly at the kilometer scale) at approximately twice the size of a normal person without issues. that is generally good enough to get an idea of what is going on, especially since a data set that large is still really pretty sparse on its own at twice human scale.

so my basic curiosity at this point is whether or not i could remap away those depth buffer issues. it isn’t an issue of depth buffer precision at any particular scale. it’s hard to explain, but it is like using false depth: you are tricking the card by taking something that is really very big and far away, shrinking it, and putting it right in the camera’s face… to the camera it looks exactly the same size, but to the depth buffer it is actually closer than it is really supposed to be.

what needs to be done is to remap the range of the depth buffer so that all of the different-sized chunks can line up properly in it. like i said, this really isn’t a depth buffer issue, it’s a precision issue… but the depth buffer is a side effect of an interesting solution.

so i dunno, if nothing else maybe these ideas might help some of you out there. if someone can offer a solution i would be delighted. if it isn’t already possible, i figure this could be solved somehow if enough people desired it. like i said, glDepthRange might do the trick if i understood it better. i noticed some equations around here, so i will probably give them a look when i get a moment.

sincerely,

michael

You’re talking about another problem: precision issues when visualizing data from an arbitrary position over a large world, typically the whole Earth.

This topic was rather about depth buffer precision itself. Camera and objects are assumed to jitter very little at this point.

Originally posted by vincoof:
You’re talking about another problem: precision issues when visualizing data from an arbitrary position over a large world, typically the whole Earth.

This topic was rather about depth buffer precision itself. Camera and objects are assumed to jitter very little at this point.
like i originally stated, i realize that. i was just hoping someone experienced here would have a quick solution.

maybe i should’ve started another topic somewhere else. maybe i will and stick a link to it in here.

still, if i may ask: is it possible to use glDepthRange, assuming you know the correct parameters, or something else, to solve this issue? or would there be some sort of issue that makes the depth non-linear or lossy, meaning intersections would still be wrong? and if nothing is possible, would it be worthwhile to request that opengl try to alleviate the matter in the future?

any other suggestions for rendering scenes on the scale of the entire earth, with seamless scale transitions?

sincerely,

michael

There are mainly three solutions (well, almost):
1- constrain your near/far clipping planes. If the whole Earth is in your viewing frustum, chances are you’re looking from thousands of kilometers away from the planet; in that case simply push the near plane very far out,
2- slice your viewing frustum into several frustums that each have a reasonable near/far ratio (a rough sketch follows below). This works quite well but kills performance,
3- use a fragment program to remap the depth computation to a scheme that best matches your visualization window. This also hurts performance and needs good hardware, but requires fewer passes than solution #2.

The glDepthRange function cannot give you more depth buffer precision. In fact it can only decrease it.
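
For what it’s worth, here is a rough sketch of solution #2, using glDepthRange only to partition the window depth range between slices (drawSliceGeometry() is a placeholder for your own culling and drawing code):

```c
#include <math.h>
#include <GL/gl.h>
#include <GL/glu.h>

void drawSliceGeometry(double sliceNear, double sliceFar);   /* placeholder */

/* Split [sceneNear, sceneFar] into slices, give each slice its own tight
 * near/far projection, and map each slice into its own part of the window
 * depth range so nearer slices always win the depth test against farther ones. */
void drawSlicedScene(double fovY, double aspect,
                     double sceneNear, double sceneFar, int numSlices)
{
    /* geometric split keeps the near/far ratio identical in every slice */
    double ratio = pow(sceneFar / sceneNear, 1.0 / numSlices);
    int i;

    for (i = numSlices - 1; i >= 0; --i)   /* farthest slice first */
    {
        double sliceNear = sceneNear * pow(ratio, (double)i);
        double sliceFar  = sliceNear * ratio;

        glDepthRange((double)i / numSlices, (double)(i + 1) / numSlices);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(fovY, aspect, sliceNear, sliceFar);
        glMatrixMode(GL_MODELVIEW);

        drawSliceGeometry(sliceNear, sliceFar);
    }
    glDepthRange(0.0, 1.0);   /* restore the default mapping */
}
```

Note that geometry crossing a slice boundary has to be submitted in both slices (it gets clipped against each slice’s near/far planes), which is part of why the approach costs performance.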

sorry for the absence, i tend to forget that i’m posting in this forum.

as for your helpful suggestions: i appreciate the effort, but i’m not sure this discussion, or my part in it, is going anywhere.

as you said before, i believe, i don’t have any depth precision issues at the moment. and yes, using glDepthRange will reduce precision; that is in line with what i’m trying to do. whether or not the reduction would be worthwhile in the end, i cannot say.

i’m aware of your suggestions for rendering things like solar systems, but that isn’t my goal, and for my application they are not really applicable. for what it’s worth though, i do adjust the near and far planes for optimal precision, and i have no need at the moment to partition rendering because of depth issues.

if there is any point to my words here, this is my problem in a nutshell. you can find it in the previous posts, but here it is again in all of its nakedness.

the problem is one of false depth. say you have a giant lit sphere very far away. now imagine replacing it with an identical but much smaller sphere which is moved along the view vector (along with the light) so that it appears identical to the large sphere. now it looks the same, but the depth buffer still gives the illusion away.

so what opengl operations can be applied to the depth computation so that the illusion can be complete?

this is my desire as blunt as i can make it.

if you know the scale of the vector by which the smaller sphere is positioned, would it not be possible to remap the depth computations with glDepthRange or some other technique?

sincerely,

michael

so what opengl operations can be applied to the depth computation so that the illusion can be complete?
Simple: draw the imposter without depth writes.

That way, everything you draw will appear on top of it. Presumably, you draw imposters at infinite distance first, so there’s no real problem with the change of depth.
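
In code that’s roughly this (a sketch; drawImposters() and drawRealGeometry() are placeholders for your own routines):

```c
#include <GL/gl.h>

void drawImposters(void);      /* placeholder: the shrunken far-away stuff */
void drawRealGeometry(void);   /* placeholder: everything else */

/* draw imposters first with depth writes disabled, then the real geometry
 * as usual, so the real geometry always ends up on top of the imposters */
void drawFrame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glDepthMask(GL_FALSE);     /* depth test still runs, but nothing is written */
    drawImposters();
    glDepthMask(GL_TRUE);

    drawRealGeometry();
}
```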

Originally posted by Korval:
That way, everything you draw will appear on top of it. Presumably, you draw imposters at infinite distance first, so there’s no real problem with the change of depth.
Yup, that’s the best thing to do, unless imposters can overlap on screen. And even if they do overlap 2D-wise, if they don’t intersect depth-wise you simply have to sort the imposters by depth and render them in back-to-front order. The only problem comes if they overlap 3D-wise, in which case there is still a solution: merge the imposters (that is, instead of creating two imposters for two stars, make a single imposter for both).