Is there a way to get OpenGL to handle translations (and object sizes) in double precision rather than single-precision floats? (I have been using glTranslated(), but it has made no difference in this case.)
The issue is that for enormous translation distances I am getting bad rendering artifacts in my texture mapping and basic polygon rendering. (I know for a fact it is not clipping at the far plane, because I set that to an astronomical value.)
The example is this: I want to draw a 1 m satellite in orbit around the Earth, approx. 42,160,000 m from the Earth's center. The Earth's radius is 6,378,137 m. When I center the view on the Earth, I scale the drawing so that 1 Earth radius = 1 OpenGL unit, and everything renders nicely. But when I move the viewpoint to the satellite itself (where 1 OpenGL unit = 1 m), the Earth flickers on and off as it renders and my texture mapping gets all funky.
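For concreteness, here is a minimal sketch of the satellite view I'm describing (drawUnitSphereEarth(), drawSatellite(), and the exact camera placement are placeholders, not my actual code):

```
/* Minimal sketch of the satellite view; the draw helpers are
   placeholders for my actual drawing code. */
#include <GL/glu.h>

void drawUnitSphereEarth(void);   /* textured unit sphere */
void drawSatellite(void);         /* ~1 m model */

const double EARTH_RADIUS_M = 6378137.0;
const double SAT_ORBIT_M    = 42160000.0;

void drawScene(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* Camera just off the satellite, looking back at the Earth,
       with 1 OpenGL unit = 1 m. */
    gluLookAt(SAT_ORBIT_M + 10.0, 0.0, 0.0,   /* eye */
              0.0, 0.0, 0.0,                  /* center: Earth */
              0.0, 1.0, 0.0);                 /* up */

    /* Earth: unit sphere scaled up to its real radius in meters. */
    glPushMatrix();
    glScaled(EARTH_RADIUS_M, EARTH_RADIUS_M, EARTH_RADIUS_M);
    drawUnitSphereEarth();
    glPopMatrix();

    /* Satellite: translated out to its orbital radius in meters. */
    glPushMatrix();
    glTranslated(SAT_ORBIT_M, 0.0, 0.0);
    drawSatellite();
    glPopMatrix();
}
```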
The reason I suspect this is a float-vs-double issue is that if I make the satellite 1 km in size (instead of 1 m), I get none of these rendering artifacts. Since floats only carry about 6-7 significant decimal digits, and 6378137 already has 7, I'm guessing the least significant bits are being rounded away. If that's the case, doubles would solve my problem.
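To sanity-check that suspicion, this small standalone C snippet (not from my renderer) prints the gap between adjacent representable floats at these magnitudes:

```
#include <math.h>
#include <stdio.h>

int main(void)
{
    float r = 6378137.0f;    /* Earth's radius in meters */
    float d = 42160000.0f;   /* satellite's orbital radius in meters */

    /* Gap to the next representable float = the best positional
       resolution a float offers at that coordinate. */
    printf("spacing at %.0f m: %g m\n", r, nextafterf(r, INFINITY) - r);
    printf("spacing at %.0f m: %g m\n", d, nextafterf(d, INFINITY) - d);
    /* IEEE-754 gives 0.5 m and 4 m respectively, so a 1 m satellite
       sits right at the rounding granularity of a float. */
    return 0;
}
```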
I have tried shrinking all distances and sizes by a factor of 1,000 before rendering, and it looks ALMOST correct, but there are some subtle flaws in the relative sizing of the objects that, admittedly, most people won't notice, but I do.
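Roughly what that workaround looks like (WORLD_SCALE and the draw helpers are illustrative names, not my exact code):

```
/* Sketch of the shrink-by-1000 workaround, so 1 OpenGL unit = 1 km. */
#include <GL/gl.h>

void drawUnitSphereEarth(void);
void drawSatellite(void);

static const double WORLD_SCALE = 1.0 / 1000.0;   /* meters -> km */

void drawSceneScaled(double earthRadiusM, double satOrbitM)
{
    glPushMatrix();
    glScaled(earthRadiusM * WORLD_SCALE,             /* ~6378 units */
             earthRadiusM * WORLD_SCALE,
             earthRadiusM * WORLD_SCALE);
    drawUnitSphereEarth();
    glPopMatrix();

    glPushMatrix();
    glTranslated(satOrbitM * WORLD_SCALE, 0.0, 0.0); /* ~42160 units */
    glScaled(WORLD_SCALE, WORLD_SCALE, WORLD_SCALE); /* 1 m model -> 0.001 units */
    drawSatellite();
    glPopMatrix();
}
```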
Do most drivers convert the doubles passed to glTranslated() down to single precision before sending them to the hardware? And does Microsoft's software implementation of OpenGL do the same before processing?
Any ideas on how to make this “render nicely” would be appreciated…