Near and far planes with sizes from 1e-6 to 1e+6

Hi All,

Here is a quick question:

We adjust the near and far planes according to the model size, pulling the near plane 1% of the model depth closer and pushing the far plane 1% farther as a safety margin.

This works pretty well.

Only one issue remains. With orthographic projection (perspective works fine) and very tiny models (around 1e-6) we get a near | far pair like 99.99999 | 100.000001, and the model gets clipped and flickers during rotation.

What are we doing wrong?

Thanks,

Alberto

Keeping near and far clip-planes close to each other is good practice, but everything has its limits.

The coefficient Po[2][2] of the ortho projection matrix is equal to -2/(far-near), and it defines the Z value of the "projected" vertex. To be more precise, Z' (the projected value of the Z coordinate) depends on Z and W:

Z’ = -2*Z/(far-near) - (far+near)*W/(far-near)

I think the formula above is descriptive enough to show why this happens, especially if you are using floats: when near and far are large and nearly equal, (far-near) loses almost all of its significant digits.
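
To see it concretely, here is a minimal sketch (plain C, near/far values taken from the first post) comparing the critical term in float and in double:

#include <stdio.h>

int main(void)
{
    /* near/far values from the original post */
    float  nearF = 99.99999f, farF = 100.000001f;
    double nearD = 99.99999,  farD = 100.000001;

    /* The Z row of the ortho matrix depends on (far - near). Around
       100.0 a float can only step in increments of ~7.6e-6, so most
       digits of the true difference (1.1e-5) are lost. */
    printf("float : far-near = %g, -2/(far-near) = %g\n",
           farF - nearF, -2.0 / (farF - nearF));
    printf("double: far-near = %g, -2/(far-near) = %g\n",
           farD - nearD, -2.0 / (farD - nearD));
    return 0;
}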

Hi Aleksandar,

I still can’t figure out how to specify near and far planes for model sizes that go from 1e-9 to 1e+9 in orthographic projection. First of all: is it possible?

What about getting the glOrtho() source code and recoding it with doubles?

Thanks,

Alberto

Of course! But I suggest you scale such tiny models so that they have "normal" dimensions. Multiply all coordinates by, for example, 1e6, and everything will be much easier to handle. Or, even better, find the radius of the model's bounding sphere and multiply all local coordinates by its reciprocal (or divide by it, but multiplication is faster than division), and you'll get a unit-size model.

Sooner or later you will need to implement your own ortho matrix, because all the matrix manipulation functions are deprecated. I myself prefer to do all calculations in doubles and convert them to floats just before sending them to the GPU. Keep in mind that most modern GPUs deal only with floats.

Another limit is the 24-bit precision of the depth buffer. So, to avoid all possible precision problems, scale your models and keep doing everything else the way you did before… :wink:
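
A minimal sketch of that normalization (the names are illustrative, and the centroid-based sphere is an approximation, not the minimal bounding sphere):

#include <math.h>

/* Normalize a model to unit size: find an (approximate) bounding-sphere
   radius around the centroid, then multiply every local coordinate by
   the reciprocal of that radius. */
void normalizeModel(double *xyz, int vertexCount)
{
    double cx = 0.0, cy = 0.0, cz = 0.0;
    for (int i = 0; i < vertexCount; ++i) {
        cx += xyz[3*i + 0];
        cy += xyz[3*i + 1];
        cz += xyz[3*i + 2];
    }
    cx /= vertexCount; cy /= vertexCount; cz /= vertexCount;

    double maxR2 = 0.0;
    for (int i = 0; i < vertexCount; ++i) {
        double dx = xyz[3*i + 0] - cx;
        double dy = xyz[3*i + 1] - cy;
        double dz = xyz[3*i + 2] - cz;
        double r2 = dx*dx + dy*dy + dz*dz;
        if (r2 > maxR2) maxR2 = r2;
    }

    /* One multiplication per coordinate: cheaper than a division. */
    double invR = 1.0 / sqrt(maxR2);
    for (int i = 0; i < vertexCount; ++i) {
        xyz[3*i + 0] = (xyz[3*i + 0] - cx) * invR;
        xyz[3*i + 1] = (xyz[3*i + 1] - cy) * invR;
        xyz[3*i + 2] = (xyz[3*i + 2] - cz) * invR;
    }
}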

Aleksandar,

Of course we can scale the model, but we want to know how far we can go without scaling, and what the limits of OpenGL (floats) are. Do you know the small and big limits of OpenGL in orthographic projection?

I can’t remember where to find the glOrtho source code; can you point me to it?

Thanks again,

Alberto

OpenGL Programming Guide, Appendix F. Scroll to the end. It just loads this matrix.

Alternatively, see the Mesa3D source code.

Keep in mind: 1e-6 … 1e6 is ~12-13 decimal sigfigs. You only get ~6-7 from float, and that's only if you compute the number in doubles (with well-conditioned functions) and only thunk down to float at the end. If you do your computation in float, even with well-conditioned functions, the most you can expect is ~3-4 sigfigs.
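
For illustration, a sketch of the Appendix F ortho matrix built in doubles and thunked to float only at the very end (column-major; glOrtho itself multiplies the current matrix, hence glMultMatrixf here):

#include <GL/gl.h>

/* Sketch of glOrtho(l, r, b, t, n, f): build the matrix in doubles,
   convert to float only when handing it to OpenGL. */
void orthoDouble(double l, double r, double b, double t,
                 double n, double f)
{
    double m[16] = {
        2.0 / (r - l), 0.0,           0.0,            0.0,
        0.0,           2.0 / (t - b), 0.0,            0.0,
        0.0,           0.0,          -2.0 / (f - n),  0.0,
        -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1.0
    };
    float mf[16];
    for (int i = 0; i < 16; ++i)
        mf[i] = (float)m[i];
    glMultMatrixf(mf);   /* same semantics as glOrtho */
}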

Thanks Dark Photon,

Do you see? So an OpenGL limit exists. Even using doubles, how far can you expect to go (as a model size range):

From 1e-3 to 1e+3 or
From 1e-6 to 1e+6 or
From 1e-9 to 1e+9 or
From 1e-12 to 1e+12

I simply wanted to get an idea of this, to know when we are close to the OpenGL platform limit versus our own implementation limit.

Thanks again,

Alberto

Everything has its limits! :slight_smile:
But what you have mentioned is not a limit of OpenGL but of floating-point arithmetic and graphics hardware constraints. Try using DX and you will have the same problems. :slight_smile:

Aleksandar,

I wanted to say:

Do you see? So a float limit exists. Even using doubles, how far can you expect to go (as a model size range):

From 1e-3 to 1e+3 or
From 1e-6 to 1e+6 or
From 1e-9 to 1e+9 or
From 1e-12 to 1e+12

Thanks,

Alberto

Hi Dark Photon,

By trial and error I found the following formula, but I am not yet satisfied with it:

switch (projectionMode)
{
    case Orthographic:
      /* Found empirically: pad near/far by 1000x the model depth. */
      cameraMinZ -= 1e+3 * modelDepth;
      cameraMaxZ += 1e+3 * modelDepth;
      break;
}

Where can I find a more mathematical approach to setting the near and far planes for ortho projection that works for very small to very big models?

Thanks,

Alberto

You are wasting your time, but keep going if you like.

I still don't understand why you are trying to keep the original measurements, because everything speaks against that:

  1. You display only one object at a time in the scene.
  2. Whatever you try will give worse results than scaling the model.
  3. Even picking in "scaled" space is as cheap as in the original space (one extra multiplication is nothing compared to the other transformations).

Our customers have access to the raw vertex data, and we cannot tell them to apply a conversion factor all the time.

I know how easy it is to scale everything to a unit sphere, but today we cannot do it like this any more.

If only I had a hint on how best to position those planes, I would be satisfied.

Thanks,

Alberto

You didn't understand me. Users can enter a model even with a diameter of 1e-100; who cares about the modeling units? But when you load that model, YOU have to transform it (on the fly) and display it at proper values. The user should never be aware of that scaling.

Not in our case, sorry…

Can you explain to me what exactly your users have to do with the vertices, and why you cannot hide the scaling from them? Is it a programming tool?

Yes, it is. Thanks for the help so far. We will do more experiments to find the best near and far planes for any model size.

Alberto

I wish you luck! :wink:

Aleksandar,

I wanted to make a small test following your suggestions. I do:

  1. reset projection
  2. apply projection
  3. reset modelview
  4. apply modelview
  5. GET THE MODEL'S DISTANCE FROM THE ORIGIN AND THE MODEL SIZE
  6. APPLY a translation + scaling to put the model at the origin with unit size
  7. get the modelview matrix, the projection matrix, and the viewport
  8. gluUnProject()
  9. the 3D point is not correct because of step 6)
  10. WHAT DO I NEED TO DO to transform this point and get the correct one? I have already tried translating and scaling with the opposite of step 6), without success.

Thanks,

Alberto

gluUnProject() gave you the position in your translated/scaled space. That point must be converted back to its original position by the inverse transformation. Because you first translated and then scaled, you have to scale first, but with the inverted coefficient, and then translate, with the negated offset. You can combine both transformations into one matrix.
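
In other words (a minimal sketch, assuming the forward transform was "translate by (-dx,-dy,-dz), then scale by s"):

/* gluUnProject returned p in the scaled space: p' = s * (p - d).
   Undo the scale first, then the translation: p = p'/s + d. */
void toOriginalSpace(double p[3], double s,
                     double dx, double dy, double dz)
{
    double invS = 1.0 / s;
    p[0] = p[0] * invS + dx;
    p[1] = p[1] * invS + dy;
    p[2] = p[2] * invS + dz;
}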

P.S. I don't understand the purpose of steps 1 and 3. If I understand their meaning correctly, they reset the matrices (load the identity matrix). There is no need for that; it is not the problem, just wasted operations. :slight_smile:

P.S.2 I knew that the picking problem was the reason you didn't accept my advice at first. But, you will see, it is not that complicated, and it solves many precision problems. :wink:

I did first:

scale1 = (scale, scale, scale)
translate1 = (-dx, -dy, -dz)
combined = scale1 * translate1

LookAt(location, target, upVector);
glMultMatrixd(combined);

Then:

gluUnProject(winX, winY, winZ, ..., &x, &y, &z);
double point[3] = {x, y, z};

scale2 = (1/scale, 1/scale, 1/scale)
translate2 = (+dx, +dy, +dz)
combined2 = translate2 * scale2

point = combined2 * point

Should be fine, shouldn't it?

Thanks,

Alberto

EDIT: You are right, gluLookAt and gluPerspective / glOrtho use glMultMatrix internally.