floats/doubles and glTranslated()

Is there a way to get OpenGL to handle translations (and object sizes) in double precision rather than single-precision floating point? (I have been using glTranslated(), but it has not made a difference in this case.)

The issue is that for enormous translation distances, I am getting bad rendering artifacts in my texture mapping and basic polygon generation. (And I know for a fact it is not clipping at the far plane, because I set that to an astronomical number.)

The example is this – I want to draw a 1 m satellite in orbit around the Earth, approx 42,160,000 m from the center of the Earth. The Earth’s radius is 6,378,137 m. When I center the view on the Earth, I scale the drawing so that 1 Earth radius = 1 OpenGL unit, and everything renders nicely. But when I move the viewpoint to the satellite itself (where 1 OpenGL unit = 1 m), the Earth flickers on and off as it renders and my texture mapping gets all funky.

The reason I suspect this is a float-vs-double issue is that if I make the satellite 1 km in size (instead of 1 m), I get none of these rendering artifacts. Since floats only carry about 6–7 significant figures, and 6378137 is 7 significant figures, I’m guessing the least significant bits are being lost. If that is the case, doubles would solve my problem.

I have tried shrinking the distances and sizes by a factor of 1000 prior to rendering, and it looks ALMOST correct, but there are some subtle flaws in the relative sizing of the objects that, admittedly, most people won’t notice, but I do.

Do most drivers truncate the numbers sent to glTranslated() to floats before sending them to hardware? And does Microsoft’s software implementation of OpenGL truncate prior to processing?

Any ideas on how to make this “render nicely” would be appreciated…

Hi !

All the OpenGL implementations I have seen use floats internally, so you can assume that is the case: doubles are converted to floats before going down to the hardware.

It’s tricky to do anything about it. You can render in multiple passes, doing some tricks with the near/far clipping planes, but it’s pretty messy.

If you have set the far clipping plane very far away, be careful with the near clipping plane: if it is set to a very low value, your depth buffer precision will be total crap, and that could be the cause of your artifacts…


Thanks! Your note that floats are used internally in most implementations confirms my suspicions.

I have also tried playing with the clipping planes, but unfortunately, I am stuck with bad depth buffering because of the distances between objects. (If I change the distances, I change the orbital behavior).

I have been trying to learn Microsoft’s Direct3D as well, and the book I have on that standard refers to “w buffering” as well as “z buffering”. The book states “The benefit [of w buffering] is that this lets programs support large maximum ranges while achieving accurate depth close to the eye point. A limitation is that it can sometimes produce hidden surface artifacts for near objects.” (I don’t use Direct3D, though, because it uses a left-handed coordinate system – it is really a pain to make rotations and cross products work in a left handed coordinate system).

Does OpenGL use w buffers in this way? A quick look through the postings turned up mentions of w buffers, but they seemed to be tied in with stencil buffers and shadowing, areas which I don’t understand myself.

If anyone could point to other applicable discussion threads/resources on OpenGL w buffering for large max ranges, it would be very helpful…

What size of a depth buffer are you using? If you are using a large number for the far clipping plane, you are likely seeing z-fighting.

For example, a 16-bit depth buffer can hold at most 65536 unique values. Assume for a minute that the depth calculation is linear. (It’s not, actually, but for this explanation it’s easier to assume so.) Now say you set the near clip plane to 0.001 and the far clip plane to 65535. If you have something at, say, 100.5 and something else at 100.6, you are going to get z-fighting, because your depth buffer can only differentiate in increments of about 1.0.

I agree with deissum, this is probably z-fighting. Set your near clip plane to a larger number and it should be improved.

[This message has been edited by ioquan (edited 02-10-2003).]

Thank you for the responses.

I would agree with the assessment that this is z-fighting, but the problem is that if I set the near plane to a bigger number, I won’t see the object in my near view. What I have is a big gap with nothing in it between my near and far planes.

One recommendation I have heard is to do a multipass render, but I don’t know how to do this, and I don’t know whether it will even solve the problem. Does this involve setting two separate viewing frustums (one near, one far) and drawing the appropriate objects in each of them prior to swapping buffers? And if so, won’t that mess up my viewpoint? Thank you…
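For reference, the depth-partitioned multipass scheme usually looks roughly like the sketch below. This is a minimal illustration assuming legacy fixed-function OpenGL (as used elsewhere in this thread); draw_earth(), draw_satellite(), and all the frustum numbers are hypothetical placeholders, not anyone’s actual code:

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* Hypothetical scene callbacks -- stand-ins for your own drawing code. */
void draw_earth(void);
void draw_satellite(void);

/* Depth-partitioned multipass render. Both passes share the same eye
   position and orientation (the modelview transform is untouched);
   only the projection's near/far planes change, so the viewpoint is
   NOT affected. */
void render_frame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Pass 1: far partition -- the Earth, millions of metres away. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, 4.0 / 3.0, 1.0e6, 1.0e8);  /* near, far in metres */
    glMatrixMode(GL_MODELVIEW);
    draw_earth();

    /* Clear ONLY the depth buffer between passes; the colour buffer
       keeps the far pass, and the near pass draws on top of it. */
    glClear(GL_DEPTH_BUFFER_BIT);

    /* Pass 2: near partition -- the 1 m satellite. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, 4.0 / 3.0, 0.1, 1.0e4);
    glMatrixMode(GL_MODELVIEW);
    draw_satellite();

    /* Swap buffers once, after both passes. */
}
```

Each pass gets its own well-proportioned near/far ratio, so depth precision is good within each partition; the "big gap with nothing in it" simply falls between the two frustums.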