Creating a 64-bit depth buffer

This is a slightly different twist on a question I asked a few months ago. I am looking to see if there is a way to manually create a 64-bit depth buffer; I don’t care if it slows down performance. Here’s why:

I must draw several objects, some very close (e.g. 0.1 units away) and some VERY far away (e.g. 1.0E8 units away). The objects drawn very far away were having z-fighting problems.

So, I tried scaling, but the rendering was not totally photorealistic. Then I tried a multipass render, drawing back to front. This ALMOST worked, but I couldn’t draw the entire scene like this because there were one or two objects that extended from nearly zero distance from the camera all the way out to well beyond 1.0E8 units away (the orbital path of a space object). If I drew this object only in my far-field pass, it got clipped before it reached the near field. This last object is supposed to pass through the near-field object, but be hidden by the large, far-field objects when it goes behind them. Instead, because I drew the far-field objects using a different depth buffer, the orbital path appears in front of the far-field object when it is supposed to be hidden behind it.

What I really need is a deeper depth buffer (i.e. 64-bit) so I can render these extremely large distances in one pass, and still have objects hidden appropriately in the far field and display correctly nearby.

Any ideas? Is it even possible in OpenGL to create custom depth buffers? I’d really like to let OpenGL do the hidden line removal rather than try to do it myself manually.

Thanks.

If something is 1 pixel big at a distance of 1E8 meters, how big is it at a distance of 1 meter?

Originally posted by jwatte:
If something is 1 pixel big at a distance of 1E8 meters, how big is it at a distance of 1 meter?

BIG! (like 6378137 units) (Obviously the exact answer depends on the angle of your viewing frustum, but I don’t know the perspective projection formulas well enough to answer this off the top of my head.)

The issue is this – I can shrink the size of the object being drawn and reduce the distance for depth buffering, but when I change the angle of my camera with respect to the near and far objects, the relative image is off slightly. Doing this effectively changes the size of the object. For instance, if I shrink the size of the Earth by 1000 and shrink its distance by 1000, it is as if my 1-unit object, instead of being 1 m in size, is now a km in size. This generally produces acceptable results, but there are some slight differences in the relative positions of the Earth and the 1 m object when viewed from an oblique angle. The bigger issue is when I want to draw another object in the near field less than 1 km away – the scaling in this case causes the two objects to overlap where, in fact, they should be completely separated. On top of that, the physics involved in modeling the orbital motion requires the actual numbers, not scaled numbers, for relative distances, because orbital velocity is a function of the radius.

These two issues are why scaling was not really an acceptable solution; it is acceptable in many cases, but not all the time as I describe above.

In any event, I think I solved the problem with my multipass render, but I’m taking a performance hit. I draw the entire scene with a far-field frustum, letting the near field be clipped, then re-render the entire scene with a near-field frustum, with the far field clipped. This slows down the rendering by a factor of 2, but it gets rid of my z-fighting problem, and everything is drawn according to physical reality.

[This message has been edited by Namwob (edited 04-26-2003).]