# Too far from origin (again)

Hi All,

We are still struggling with this issue despite the following discussion:

In the following paper:

http://www.floatingorigin.com/pubs/thorneC-FloatingOrigin.pdf

> As shown in Figure 3, reverse transforming the world can be achieved by placing a top level transform, the world transform (WT), over the entire set of objects. Whenever a viewpoint is selected, the inverse of the viewpoint’s coordinate is applied to the WT. The result is that the objects are shifted in reverse towards the viewer who stays at the origin.

What exactly do we need to do in our code? Is there a working C/C++ sample to study somewhere? Any tutorials on the subject?

Thanks,

Alberto

Going over the old thread, I think the basic thing you’re missing is:

Never use glTranslate.

You need to use a matrix library in your own code, and only send the final matrices to the GPU (using glMultMatrix). Also, use doubles in these matrices.

So rather than having OpenGL compute ObjectSpace->WorldSpace->ViewSpace, do the calculation yourself (in double-precision), so OpenGL only computes ObjectSpace->ViewSpace (in single-precision), which should give you much better precision.

Hope this helps.

Thanks Mikkel,

I believe that beyond a float’s 7 digits of precision you can do whatever you want on the CPU, but you always lose precision in the final step to the GPU.

The only reliable solution I know so far is to subtract the viewpoint from all the viewport vertices, but it’s impractical, even more so because when you decide to change the viewpoint you need to do all the work again.

Thanks,

Alberto

True, but the point of doing it in double-precision is that your end result (viewspace coordinates) will suffer less from the conversion to single-precision (as they are now “2m from the eye” rather than “at this very distant position in the world”).

> The only reliable solution I know so far is to subtract the viewpoint from all the viewport vertices, but it’s impractical, even more so because when you decide to change the viewpoint you need to do all the work again.

Your OpenGL implementation would likely be doing all this work “again” for you anyway, so you shouldn’t be losing any performance from it; you just have to do it yourself rather than have the driver handle it. Of course, you do lose performance from doing it in double precision rather than float.

Mikkel,

What about display lists, shall we recompile everything every time the viewpoint changes?

Thanks,

Alberto

Yes. You can create a display-list containing only the matrix-multiplication, and call it from your “drawObject” displaylist. That way you only have to recompile a very small list.

Side note: If you want your program to run on “everything”, you should probably consider moving away from display lists.

> Yes. You can create a display-list containing only the matrix-multiplication, and call it from your “drawObject” displaylist. That way you only have to recompile a very small list.

I don’t fully understand what you mean. If I need to subtract the viewpoint from all vertices, what matrix do I need? If I put a matrix inside a display list, it would also be of low precision, wouldn’t it?

Thanks,

Alberto

As you say: if you put a matrix inside a display list, it will be of low precision - it makes no difference. So you still need to put only the “final” transformation matrices into lists.

What I mentioned about a display list with only the matrix multiplication in it (that you then call from your drawing display list) is meant to decrease the overhead of updating the lists.
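A rough sketch of that two-list layout in legacy fixed-function OpenGL (the names are made up, and this needs a live GL context, so treat it as an illustration of the structure rather than runnable code):

```cpp
// Two display lists, built once at startup (hypothetical ids):
GLuint gMatrixList;   // tiny list: holds only the modelview matrix
GLuint gObjectList;   // big list: holds the static geometry

void buildObjectList()
{
    gMatrixList = glGenLists(1);
    gObjectList = glGenLists(1);

    // Compiled once. glCallList is recorded in the list, and the *current*
    // contents of gMatrixList are executed each time gObjectList is drawn.
    glNewList(gObjectList, GL_COMPILE);
    glCallList(gMatrixList);
    glBegin(GL_TRIANGLES);
    glVertex3f(0.0f, 0.0f, 0.0f);   // small offsets from the local origin
    glVertex3f(2.0f, 0.0f, 0.0f);
    glVertex3f(2.0f, 2.0f, 0.0f);
    glEnd();
    glEndList();
}

// Recompiled whenever the viewpoint changes: only 16 doubles, so the
// recompile is cheap and gObjectList never needs rebuilding.
void updateMatrixList(const GLdouble modelview[16])
{
    glNewList(gMatrixList, GL_COMPILE);
    glMultMatrixd(modelview);
    glEndList();
}
```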

Alberto,

This GLUT example demonstrates the visual jitter and one possible solution, the RTC method described in my blog post. The display() function comments describe the problem and solution.

The example is a Visual C++ solution. As it is a GLUT example, you should be able to easily convert it to any other platform. I hope it is applicable to your problem.

Wow, I’m honored to speak to the author of that amazing article on visual jitter!

I will for sure check out your sample and come back with some questions. Thanks so much for your effort to eliminate visual jitter from all our newbie applications!

Alberto

wSpace,

I really want to thank you so much for the C++ sample provided, in a few lines of code it explains everything so well! Please consider linking it from your amazing article on the subject.

The first issue we faced during integration into our app is the recovery of the camera space (also called the camera frame or camera coordinate system). Of course, using the gEye, gBoxCenter and gUp points is not correct anymore. We also tried adding gBoxCenter to and subtracting it from those points, without success.

Do you know what we need to do to recover the real camera space after applying RTC?

Thanks so much again. If one day you come to Italy you have an all-you-can-eat pizza dinner on me, you can bet on it!

Alberto

Something related to this…

If you want a robust understanding of floating-point,
check out this recent article, written in the context
of networking in multi-player games:

http://gafferongames.com/networking-for-game-programmers/floating-point-determinism/

Alberto, thanks for the kind words.

I’m not certain what you mean by camera space in your context. In what frame is camera space? In my example code, is it the box’s local coordinates; is it the world coordinates? Are you trying to determine the camera up vector, eye position, and box center in a particular coordinate system?

Out of curiosity, what are you using “real camera space” for?

Regards,
Deron

Hi wSpace,

What I mean is that in our program we need to create a camera-aligned bounding box of the model (by camera-aligned I mean a bounding box with the Z dir along the camera [eye - target], the Y dir along the camera upVector, and the X dir along the cross product of the two).

This was straightforward without the RTC additional matrix multiplication, using the code below:

```
void GetFrame(Point3D& camOrigin, Vector3D& camX, Vector3D& camY, Vector3D& camZ)
{
    camOrigin = eye;

    camZ = Vector3D::Subtract(eye, target);
    camZ.Normalize();

    camY = upVector;
    camX = Vector3D::Cross(camY, camZ);
}
```

This code doesn’t work correctly anymore and we don’t know where to start. We tried to add/remove gBoxCenter from the camOrigin without success.

Thanks,

Alberto

wSpace,

It was my fault: I imagined there were so many references in the code to update and shift by gBoxCenter that at first look it seemed the wrong approach, but it isn’t…

Thanks again for your help.

Alberto

Hi Deron,

To tell the truth, there is still something I cannot fully understand. Can you please try changing the following two functions in your rtcDemo?

In practice we are loading land data in DXF format (frequently located very far from the world origin) and need to zoom close to some building details. This is where the jitter starts showing. The problem is that, as in the code below, we have most of the land curves compiled into display lists with their large coordinates.

Can you please try to solve the jitter issue with the model from the init() function below in your rtcDemo sample?

What is the best approach to get rid of the jitter here?

Thanks,

Alberto

```
void init()
{
    //
    // State
    //
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glDisable(GL_DEPTH_TEST);

    //
    // Box
    //
    gBox = glGenLists(1);
    glNewList(gBox, GL_COMPILE);
    glColor3f(1.0f, 1.0f, 1.0f);

    glBegin(GL_TRIANGLES);
    glVertex3d(gBoxCenter.x, gBoxCenter.y, gBoxCenter.z);
    glVertex3d(gBoxCenter.x + 2, gBoxCenter.y, gBoxCenter.z);
    glVertex3d(gBoxCenter.x + 2, gBoxCenter.y + 2, gBoxCenter.z);
    glEnd();

    // COMMENTED THIS LINE
    // glutWireCube(1.0);
    glEndList();

    //
    // Set the eye position
    //
    setEye();

    //
    // Help
    //
    printf("press 'r' to toggle between OpenGL and RTC modelview matrix computation modes.\n");
}
```

and:

```
if (!gUseRTC)
{
    //
    // OpenGL method
    //
    // This uses OpenGL to do the matrix math. While we are sending in
    // double values, OpenGL is doing all the math in float precision.
    // Because of that, we will get precision errors, as floats are only
    // good for about 7 decimal digits of precision.
    //
    gluLookAt(gEye.x, gEye.y, gEye.z,
              gBoxCenter.x, gBoxCenter.y, gBoxCenter.z,
              gUp.x, gUp.y, gUp.z);

    // COMMENTED THIS LINE
    // glTranslated(gBoxCenter.x, gBoxCenter.y, gBoxCenter.z);
}
```

This is not going to work - you need to change the coordinates to something that will fit reasonably into floats.

If I understand you correctly, you have a chunk of land “far away”. One way to solve the problem would be to make all vertex positions relative to some local coordinate system, for example the center (average position) of your land chunk. If the extents of the chunk are too big, you could dice it up into several chunks.

Hope this helps.

Mikkel,

I knew that sooner or later I would need to subtract the viewpoint from the model vertices. The problem is that if I add things here and there, I need to re-compile all the objects every time to adjust to the new local origin…

Thanks,

Alberto

You don’t need to subtract the viewpoint from the model-vertices.

You need to create a single origin for your model-vertices (e.g. the average), and make your vertices relative to this origin.

A translation from model->world is then put in a matrix. The matrix needs to be of doubles, because the position of your chunk of land is far away from the world origin.

The problem is we cannot change the original land coords for many reasons. The only option is to make a display list that contains each land point with the local origin subtracted…