Too far from origin (again)

AutoCAD drawings are usually in some units (m, cm, mm, inch, …). Why don't you convert the units to something more useful for your application? Multiply or divide all vertex positions from the ACAD drawing during import.

Next thing you can do is to adjust your perspective matrix. Don't put near = 0.001f, far = 1000000.0f. Instead, find the closest and farthest objects from the camera and adjust near and far according to those values. Learn to love your z-buffer.
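A minimal sketch of that near/far fitting, assuming each visible object carries a bounding sphere reduced to eye space (the structure and names here are illustrative, not from any poster's code):

```cpp
#include <algorithm>
#include <vector>

// Bounding sphere of one object, reduced to eye space: distance of its
// center from the camera, plus its radius. (Illustrative structure.)
struct Sphere { double centerDist; double radius; };

// Fit near/far tightly around the visible objects instead of using
// fixed extremes like 0.001f and 1000000.0f.
void fitNearFar(const std::vector<Sphere>& visible, double& zNear, double& zFar)
{
    zNear = 1e30;
    zFar  = -1e30;
    for (const Sphere& s : visible) {
        zNear = std::min(zNear, s.centerDist - s.radius);
        zFar  = std::max(zFar,  s.centerDist + s.radius);
    }
    // Never let near collapse toward zero: depth-buffer precision is
    // roughly proportional to the near distance.
    zNear = std::max(zNear, zFar / 10000.0);
}
```

The clamp at the end is the important part: a tight far plane buys little if near is allowed to approach zero.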

I'm not sure whether this is helpful for you, because I don't understand your problem, except that you have to deal with objects far, far away from the camera. What exactly is your problem? Z-fighting? Selection & picking? Lighting?

Hi yooyo,

Thanks for joining the discussion.

The problem is ‘shaking’ entities when zooming in very close on some detail of a site far away from the world origin.

And yes, we adjust near and far planes based on the vertices inside frustum.

After discussing this topic so many times, it looks like the best approach is to recompile the object's display list on demand, subtracting the best local origin from the object's vertices…

And if you remember the very first pieces of advice, that is exactly what they stated. :slight_smile:

The only solution that really works with huge numbers is GPU RTE. I’m very grateful for Deron’s blog!

Aleksandar,

The small difference I didn’t grasp is the fact you don’t change the geometry coords but only what you pass to the display list.

Thanks,

Alberto

It’s good that he wrote it up, but it’s a lot of verbiage for this:

Use doubles to compute your MODELVIEW on the CPU, then and only then thunk down to float and give it to the GPU.

Why does this fix it? MODELING transform contains a “huge” translate. VIEWING contains a “huge” negative translate. Multiply to get MODELVIEW, and for things close to the eye, these translates largely cancel. Float loses too much precision with this big_number1-big_number2 game, so use doubles. Doesn’t “solve” it, but pushes your error out to where it’s often acceptable for earth scales.
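The big_number1-big_number2 cancellation can be demonstrated in a few lines. The coordinates below are made up, at roughly earth-radius magnitude:

```cpp
// Subtract eye from vertex the "float32 all the way" way: both values
// are rounded to float first, so the fractional detail is already gone
// before the subtraction happens.
inline float diffInFloat(double vertex, double eye)
{
    return static_cast<float>(vertex) - static_cast<float>(eye);
}

// Subtract in double, then (and only then) thunk the small result down
// to float: the big numbers cancel before any precision is lost.
inline float diffInDouble(double vertex, double eye)
{
    return static_cast<float>(vertex - eye);
}
```

At 20,000 km from the origin a float32 coordinate steps in 2 m increments, so a 1.5 m separation computed the first way comes out as 2.0 m; computed the second way it comes out exact.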

Actually, neither one, assuming you build geometry-only display lists. Same display list. You just change how you compute the MODELVIEW transform that’s active when you render the display list to ensure greater precision.

DarkPhoton,

We need to keep the vertices in memory as big numbers (1000000); this is why I say we need to compile the display list subtracting this big number from the eye point.

Thanks,

Alberto

You didn’t get the point, Dark!
There is a difference in the way the coordinates are generated. They are split into two floats, so it is not the same display list (or VBO in my case). The first number serves for large-distance viewing and has almost no contribution for objects that are near the viewer. The second one is a fraction that has no contribution for distant objects and is only relevant for close ones.

I agree that this method creates errors in the real values of the coordinates, but who cares if it is not visible?
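For reference, the high/low split itself is tiny. A sketch of the CPU side (in the full GPU RTE technique the shader subtracts the eye's own high/low pair from each part before they are summed, which is where the precision gain comes from):

```cpp
// Split a double coordinate into a high/low float pair, as used by GPU
// RTE (relative-to-eye) rendering. The high part carries the large
// distance; the low part carries the fine detail near the object.
inline void doubleToTwoFloats(double value, float& high, float& low)
{
    high = static_cast<float>(value);  // coarse part, rounded
    low  = static_cast<float>(value - static_cast<double>(high));  // residual
}
```

The residual is small by construction, so it survives the round-trip to float32 intact, and the double sum of the two parts reproduces the original coordinate far better than a single float can.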

It is not a problem, because you have to rebuild DLs only when coordinates change, which I hope is not very often. In any case, if coordinates change you should rebuild them. My advice is to keep “local” (small) coordinates and “add” the displacement only for displaying/storing (I mean displaying in the interface while the cursor is being moved across objects, or something similar).

How about keeping the vertices relative to a local object origin, and have that object positioned into the world by a MODELING transform. Then the vertices and thus your display list never need to change.

But maybe I’m missing something about your problem… (?)

Doing that in real time, however, is problematic. Display list compilation is expensive!! Snap some timing calipers on it and see!

Dark Photon,

Yes, I was thinking about offering a regen command when jitter appears, to allow a full regen to the new best local origin. What do you think?

Thanks,

Alberto

Sorry, I’m just not getting the purpose of the whole regen thing. Just use the same one.

More detail:

  1. Take each one of your scene entities. If you center it about the origin, can you represent the vertex positions with sufficient accuracy using float? No? Stop. Reorg your entities such that this is the case. When done, each entity has a local origin.
  2. Now position these entities into the world using a MODELING transform in double (float64). Don’t worry about big numbers at this point. Chances are you don’t care about millimeter precision for something the scale of the planet.
  3. Now position the eye into the world using the VIEWING transform (actually inverse VIEWING, but you get my drift) in double. Again, don’t worry about big numbers. Same caveat.
  4. Now when rendering the scene, multiply your MODELING and VIEWING matrices in double on the CPU, and only then, once you have the aggregate MODELVIEW for a batch in double, do you hand it to the GPU and let it thunk down to float32.
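The steps above can be sketched in C++, with the matrix helpers written out by hand (column-major layout, GL-style; the scenario numbers are illustrative):

```cpp
// Column-major 4x4 matrix in double, laid out the way OpenGL expects.
struct Mat4d { double m[16]; };

// Pure translation matrix.
Mat4d translation(double x, double y, double z)
{
    Mat4d t = {{1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1}};
    return t;
}

// r = a * b, all in double.
Mat4d mul(const Mat4d& a, const Mat4d& b)
{
    Mat4d r;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k)
                s += a.m[k * 4 + row] * b.m[col * 4 + k];
            r.m[col * 4 + row] = s;
        }
    return r;
}

// Step 4: aggregate MODELVIEW in double, and only then thunk it down to
// float32 for the GPU (e.g. via glLoadMatrixf or a float uniform).
void modelviewToFloat(const Mat4d& viewing, const Mat4d& modeling, float out[16])
{
    Mat4d mv = mul(viewing, modeling);
    for (int i = 0; i < 16; ++i)
        out[i] = static_cast<float>(mv.m[i]);
}
```

With an entity placed 20,000 km from the origin and the eye 1.5 m away from it, the two huge translates cancel in double, so the float32 matrix the GPU finally sees contains only small numbers.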

If it’s not static, you probably would be using VBOs instead of display lists anyway…

There are other approaches too – using ints for instance. See below.

Further reading:

How about keeping the vertices relative to a local object origin, and have that object positioned into the world by a MODELING transform. Then the vertices and thus your display list never need to change.

But maybe I’m missing something about your problem… (?)

You are correct. If you maintain viewing and model matrices with high precision and have local object space coordinates, then the modelview result automatically produces a transformation matrix with low numbers for objects around the viewer. Object space numbers then transform to eyespace with high precision and never see large numbers.

You are not missing anything, this is the right way to do things, and with shader based implementation and software matrix stacks passed in as uniforms there’s nothing left for developers to complain about here.

Dark Photon, Dorbie,

I like your approach, and a small C/C++ sample would be great to completely understand it. Do you know where I can find one?

I find it difficult to understand the concept because I have always lived with only PROJECTION and MODELVIEW matrices, not with PROJECTION, MODELING and VIEWING.

Let me check if I understood well:

  1. I have the Pluto class compiled with coordinates that refer to the planet center in the drawList display list
  2. the Pluto class includes a MODELING matrix that I use with glMultiplyMatrix() to place the planet at its solar system position
  3. PLEASE HELP ME HERE !
  4. I multiply the pluto class MODELING matrix with the VIEWING matrix at point 3) on the CPU and load it as GL_MODELVIEW for each object in the scene.

Thanks,

Alberto

Check out the references I posted. They may have one. But it’s really not hard. Just do not use OpenGL for your matrix math. It only supports float32 internally, and that’s what kills you. Do your own math on the CPU, in double where necessary.

I find it difficult to understand the concept because I have always lived with only PROJECTION and MODELVIEW matrices, not with PROJECTION, MODELING and VIEWING.

Very simple really:

* clip-space = PROJECTION * MODELVIEW * object-space
* clip-space = PROJECTION * (VIEWING * MODELING) * object-space
                                     ^ world-space is here!

It’s the big world-space coords that’re killing you.

Note that here I use OpenGL’s operator-on-the-left notation.

Let me check if I understood well:

  1. I have the Pluto class compiled with coordinates that refer to the planet center in the drawList display list

If you’re never gonna get close to Pluto maybe. But if you’re gonna be flying the surface, that’s doubtful. Can you really represent Pluto to the accuracy you need with float32 precision vertex coords? Pluto is 2400km in diameter! With float32, you’ve got maybe ~1 meter accuracy. For flying the surface, you’re probably going to have to bust Pluto up.

  1. the pluto class include a MODELING matrix that I use with glMultiplyMatrix() to place the planet to its solar system position

No! Absolutely not! Again, don’t use OpenGL for your matrix math (it’s glMultMatrix* BTW). It only supports float32, and big numbers + float32 kills your available precision. This causes the jitter.

If you use doubles to compute your MODELING and VIEWING matrices on the CPU, the only GL MODELVIEW matrix API you should use is glLoadMatrixd.

… or glMatrixLoaddEXT.

Touché! :wink: Yeah getting rid of selectors is a good thing for readability and reusability (EXT_direct_state_access)

You are correct. If you maintain viewing and model matrices with high precision and have local object space coordinates, then the modelview result automatically produces a transformation matrix with low numbers for objects around the viewer. Object space numbers then transform to eyespace with high precision and never see large numbers.

You are not missing anything, this is the right way to do things, and with shader based implementation and software matrix stacks passed in as uniforms there’s nothing left for developers to complain about here.

Do you really not understand, or do you just not want to understand? :frowning:
Everything you have said has been known since the dawn of computer graphics, and nobody denies that. But there are some cases when it is expensive to rebuild lists, buffers or whatever technology is used. Building a whole planet is such a case. I still firmly claim that the proposed method is VERY useful in some particular cases (not for CAD drawings, certainly), and that it cannot be reproduced with fixed functionality.

I’m sorry for the late answer…

Dark Photon,

In my example, Pluto is just something far away from the world origin, not necessarily the real planet Pluto. Our problem is to zoom in close on a CAD building plan far away from the world origin (something like 800000, 1999900, 0).

Please write some one-triangle pseudocode for me; I still find it difficult to fully understand without seeing some code lines…

Thanks so much for sharing your experience.

Alberto
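A one-triangle pseudocode sketch of the recipe discussed above, assuming classic GL 1.x display lists; the `*_d` helpers are hypothetical double-precision matrix routines you would write yourself on the CPU, not OpenGL calls:

```
// Compile once: vertices relative to the building's LOCAL origin,
// so they are small and safe in float32.
list = glGenLists(1)
glNewList(list, GL_COMPILE)
  glBegin(GL_TRIANGLES)
    glVertex3f(0.0, 0.0, 0.0)
    glVertex3f(1.0, 0.0, 0.0)
    glVertex3f(0.0, 1.0, 0.0)
  glEnd()
glEndList()

// Every frame, in DOUBLE on the CPU (helpers are hypothetical):
MODELING  = translate_d(800000.0, 1999900.0, 0.0)  // building into the world
VIEWING   = invert_d(cameraWorldTransform)         // also huge numbers
MODELVIEW = multiply_d(VIEWING, MODELING)          // huge translates cancel here

glMatrixMode(GL_MODELVIEW)
glLoadMatrixd(MODELVIEW)   // driver thunks the now-small values to float32
glCallList(list)           // the display list itself never changes
```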