Sorry to butt in, but a while back I was doing some work involving very large ranges, which presented some difficult precision and depth buffer issues. I plan to pick that work back up fairly soon, so with all these knowledgeable people in one place, I was hoping I might be able to get a few ideas.
Basically, I'm rendering the entire planet Earth with the latest government elevation data set. I won't get too far into the details, but I'll start with the precision issues first, then describe how they relate to the depth buffer.
Earth's radius is about 6400 km. However, I was only safely able to push the radius to half that before precision at the human scale started causing seams and things like the camera jittering.
The system is a dynamic-LOD, ROAM-style system with two tiers of tessellation. Chunks of the planet, at whatever LOD, are normalized so their local coordinates sit in an ideal range to avoid precision issues; something like the sketch below.
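To pin down what I mean by "normalized" (the names here are just for illustration): each chunk stores float vertices in a small local space, plus a double-precision origin and scale saying where it really sits in planet space.

    /* One surface patch of the planet.  Vertex positions live in a small,
     * normalized local space as floats; the chunk's true placement in
     * planet space is kept in double precision. */
    typedef struct {
        double origin[3];    /* chunk center in planet space (meters) */
        double scale;        /* local-to-planet scale; shrinks with LOD */
        float *positions;    /* normalized local vertices, 3 floats each */
        int    vertex_count;
    } Chunk;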
Before rendering, the chunks would normally be transformed into their proper position by the modelview matrix.
Remember that most of the Earth is just solid interior, so everything interesting happens at the surface, and all operations would normally take place around 6400 km from the origin. That alone can cause precision issues once interaction happens at the human scale: a 32-bit float has a 24-bit mantissa, so at 6.4e6 meters from the origin the representable positions are about half a meter apart.
For a while this was causing me serious issues, but there seemed to be a slicker approach: rather than transforming the chunks into world space, I inverse-transform the camera into each chunk's local space (remember, the chunks are normalized for local computation). A sketch of what I mean follows.
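Roughly what that looks like, under a few assumptions (column-major GL matrices, 'rot' being the world-to-view rotation, a chunk frame that is just a scaled translation of planet space, and the Chunk struct from above): the huge offset is computed camera-minus-origin in doubles, so by the time anything becomes a float the numbers are small again.

    /* Build the chunk's modelview by expressing the camera in the chunk's
     * local frame rather than moving the chunk out to planet space. */
    void chunk_modelview(const Chunk *c,
                         const double cam_pos[3],  /* camera, planet space */
                         const double rot[9],      /* world-to-view, row-major */
                         float out[16])            /* column-major, for GL */
    {
        /* Camera relative to the chunk origin, still in doubles. */
        double rel[3] = { cam_pos[0] - c->origin[0],
                          cam_pos[1] - c->origin[1],
                          cam_pos[2] - c->origin[2] };

        /* Upper 3x3: view rotation times the chunk's uniform scale. */
        for (int col = 0; col < 3; ++col)
            for (int row = 0; row < 3; ++row)
                out[col * 4 + row] = (float)(rot[row * 3 + col] * c->scale);

        /* Translation column: -rot * rel, done in doubles, cast last. */
        for (int row = 0; row < 3; ++row) {
            double t = rot[row * 3 + 0] * rel[0]
                     + rot[row * 3 + 1] * rel[1]
                     + rot[row * 3 + 2] * rel[2];
            out[12 + row] = (float)(-t);
        }
        out[3] = out[7] = out[11] = 0.0f;
        out[15] = 1.0f;
    }

Then it's just glLoadMatrixf(out) per chunk, and the card never sees a coordinate anywhere near 6.4e6.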
This worked great, everything lined up with no artifacts… however, one problem: the depth buffer is misaligned for chunks at different LODs. I spent a little while looking for ways to remap the depth buffer, using things like glDepthRange and hunting for extensions. glDepthRange might work if I knew more about how the depth values are computed, but I never could track that down before I lost interest.
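(For reference, I've since pieced the mapping together. For a standard perspective projection with near/far planes n and f, eye-space depth z_e, which is negative in front of the camera, and glDepthRange(d_n, d_f), the value stored in the depth buffer is:

    z_{\mathrm{ndc}} = \frac{f+n}{f-n} + \frac{2fn}{(f-n)\, z_e}

    z_w = d_n + (d_f - d_n) \cdot \frac{z_{\mathrm{ndc}} + 1}{2}

So the stored depth is a function of 1/z_e, which is exactly why a chunk rendered at a scaled-down distance lands in the wrong depth slot even though it rasterizes to the same pixels.)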
The main reason I lost interest at the time was that I found out the hardware matrix multiplications are actually quite lossy, and if I did the multiplications in software I could get by with far fewer precision issues.
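Concretely, "in software" means concatenating the whole matrix chain in doubles on the CPU and casting to float only once, right before handing the result to GL, instead of building the product with glMultMatrixf calls. A minimal sketch:

    /* Multiply two 4x4 column-major matrices entirely in doubles.
     * 'out' must not alias 'a' or 'b'. */
    void mat4_mul_d(const double a[16], const double b[16], double out[16])
    {
        for (int col = 0; col < 4; ++col)
            for (int row = 0; row < 4; ++row) {
                double s = 0.0;
                for (int k = 0; k < 4; ++k)
                    s += a[k * 4 + row] * b[col * 4 + k];
                out[col * 4 + row] = s;
            }
    }

    /* The only point where precision is deliberately dropped. */
    void mat4_to_float(const double m[16], float out[16])
    {
        for (int i = 0; i < 16; ++i)
            out[i] = (float)m[i];
    }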
That only went so far, but I was generally satisfied: I could explore the elevation data set, which is roughly kilometer-scale, at about twice human size without issues. That's generally good enough to get an idea of what's going on, especially since the data set, large as it is, is still pretty sparse when viewed at twice human size.
So my basic curiosity at this point is whether or not I could remap away those depth buffer issues. It isn't an issue of depth buffer precision at any particular scale. It's hard to explain, but it's like using false depth: you're tricking the card by taking something that is really very big and far away, shrinking it, and putting it right in the camera's face… to the camera it looks exactly the same size, but to the depth buffer it is actually closer than it is really supposed to be.
What needs to be done is that the depth range needs to be remapped per chunk, so that all the different-sized chunks line up properly in the depth buffer. Like I said, this really isn't a depth buffer issue, it's a precision issue… but the depth buffer problem is a side effect of an otherwise interesting solution. One idea I'd like to try is sketched below.
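Now that I understand the mapping above a little better, here is the kind of thing I have in mind: per chunk, compute its true near/far distance from the camera in doubles, push those through the real projection to get the window-depth slot the chunk should occupy, and hand that slot to glDepthRange. This is only a sketch of the idea, not something I've verified; it also assumes the shrunken chunk is drawn so it spans the full NDC depth range (e.g. with a snug per-chunk projection near/far).

    #include <GL/gl.h>

    /* Window depth the real projection (near_p, far_p) would assign to an
     * unscaled point at distance 'dist', with the default glDepthRange(0, 1). */
    static double window_depth(double dist, double near_p, double far_p)
    {
        double z_ndc = (far_p + near_p) / (far_p - near_p)
                     - (2.0 * far_p * near_p) / ((far_p - near_p) * dist);
        return 0.5 * z_ndc + 0.5;
    }

    /* d_min/d_max: the chunk's true eye-space depth bounds, taken from its
     * bounding volume in doubles, before any shrinking. */
    void set_chunk_depth_range(double d_min, double d_max,
                               double near_p, double far_p)
    {
        glDepthRange(window_depth(d_min, near_p, far_p),
                     window_depth(d_max, near_p, far_p));
    }

The catch is that glDepthRange remaps linearly in NDC depth while the true mapping is hyperbolic in distance, so within a chunk this would only be approximately right, and I don't know whether adjacent chunks would meet seamlessly. That's exactly the kind of thing I'd love input on.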
So, I dunno. If nothing else, maybe these ideas might help some of you out there. If someone can offer a solution I would be delighted. If it isn't already possible, I figure this could be solved somehow if enough people desired it. Like I said, glDepthRange might do the trick if I had understood it better. I noticed some equations around here, so I will probably give them a look when I get a moment.
Sincerely,
Michael