Jumping lines during rotation

We work with doubles and at 500000 we experience the same issue.

Thanks.

Alberto

GPUs don’t use double precision for OpenGL yet.

Yes, I think so…at least that’s what I’ve found in my code. I hope I did that right.

Yeah, I believe Xmas is right…I wish there was some flag to set that to true, but I’ve not heard anyone mention it yet, and I too was surprised that OpenGL converts to floats internally…especially since I’ve heard people mention how they moved to doubles and improved their “worlds/scenes” - I need to get more info from them.

I would highly suggest you do what I did: translate every object in your scene by the 3D location of the object you want to focus on, so that object ends up at the origin, and point gluLookAt at 0,0,0. For example, in my case I have an Earth-teapot (or vehicle) system. When I want to focus on the Earth, I leave the Earth at 0,0,0 and the vehicle at 10000,10000,10000, and the rotations are smooth. When I decide to focus on the vehicle instead, I subtract 10000,10000,10000 from the positions of both the Earth and the vehicle, so the vehicle itself sits at 0,0,0 and the Earth sits at -10000,-10000,-10000. My gluLookAt might only change how far the eye is from the look point (the vehicle is smaller, so I want the eye closer), but the look point stays at 0,0,0. In this case the rotations are also smooth.

If I don’t do the above, then the gluLookAt for the vehicle case would be something like gluLookAt(10001,10001,10001, 10000,10000,10000, 0,1,0), and that introduces jitter because (I believe) the view is essentially being rotated around the 0,0,0 point - gluLookAt multiplies the view matrix, and at 10000,10000,10000 the floating-point error is much greater than at (or near) 0,0,0.

One word of caution…don’t simply do a global glTranslate(-10000,-10000,-10000) on the entire scene thinking you can kill two birds with one command, because that didn’t work for me. I think this is again a result of the matrix multiplication that glTranslate() performs internally, which brings the same inaccuracy into play. So make sure to subtract the distance from each object’s position before you apply its individual glTranslate() command and it should work…it did for me.

Also, one other thing to clear up…both cases in the demo that I created use gluLookAt() at 0,0,0, i.e. the look point is at 0,0,0 and not at the location of the vehicle (say at 10000,10000,10000). The “non-jitter” case moves every object by the offset distance, so the teapot ends up at 0,0,0. The “jitter” case uses a global glTranslated() call to move everything by the offset distance, which also effectively moves the teapot to 0,0,0, but because that glTranslated() call multiplies everything through a -10000,-10000,-10000 translation, the precision loss is still there. This jitter case is exactly what happens if I don’t move any of the objects at all and simply point gluLookAt() at the teapot while it sits at 10000,10000,10000. I might add that to the demo just for completeness…give me some time.

I updated the demo: TestRotations.jnlp

Now, pressing ‘L’ toggles through the 3 modes and the corresponding gluLookAt() values are displayed at bottom left.

0) No jitter - each object translated explicitly
1) Jitter - global glTranslate() call
2) Jitter - no translations, gluLookAt() points at the object at its specified location

btw, if anyone can confirm I did things correctly I’d appreciate it…also this gives me a chance to bump this thread :slight_smile:

Z-Knight,

I was thinking of using glTranslate() on the model and setting up the camera with:

gluLookAt(1,1,1, 0,0,0, 0,1,0);

Do you think it won’t work?

Thanks,

Alberto

I don’t think so…I tried this, and if you run my demo it is one of the cases: case (1). There I use just a global glTranslated() that translates the entire scene by the offset distance of the object I want to focus on, and I still get the jitter. It wasn’t until I explicitly subtracted the offset distance from each object before translating it that it finally worked: case (0).

Basically this case (1) did not work:


    glTranslate( -offset )
        glPushMatrix()
           glTranslate( earthDistance )
           DrawEarth()            
        glPopMatrix() 

        glPushMatrix()
           glTranslate( teapotDistance )
           DrawTeapot()
        glPopMatrix()

So I had to do the following instead, case (0), and the jitter went away:


        glPushMatrix()
           glTranslate( earthDistance - offset )
           DrawEarth()            
        glPopMatrix() 

        glPushMatrix()
           glTranslate( teapotDistance - offset )
           DrawTeapot()
        glPopMatrix()

where ‘offset’ is the distance to the object you want to view…for example, if you wanted to view the teapot then ‘offset’ would be equal to ‘teapotDistance’, and so the glTranslate() before DrawTeapot() would effectively be 0,0,0.

I’d like to add some thoughts to my post above…I think the reason the second pseudocode (case 0) is correct is that in the first one (case 1) the global glTranslate(-offset) is essentially a matrix multiplication on the modelview matrix, and if you have a large offset distance you are multiplying with large values that exceed the floating-point precision, so you lose some overall precision and get the jitter that we see.

In the second case (case 0) the ‘-offset’ is applied directly to each object’s position before the actual glTranslate() call, so the focused object (the object we want at 0,0,0, the thing we look at) is not translated by a large distance, and the matrix multiplication on the modelview matrix doesn’t cause a loss of precision. Yes, there will be a loss of precision when drawing the Earth, because you are still applying a large value in the glTranslate() call that multiplies the modelview matrix before the Earth is drawn, but the Earth is not the object we are focusing on - it gets moved far away, so we won’t see the effect of the precision loss on its drawing.
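
To make that concrete, here is a tiny standalone check (just an illustration, not part of the demo code) of what float does to a small offset sitting on top of a large translation, compared to double:

#include <stdio.h>

int main(void)
{
    /* A 0.001-unit detail riding on top of a 100000-unit translation:
       in float the small part is rounded away, in double it survives. */
    float  bigF = 100000.0f;
    double bigD = 100000.0;

    printf("float : %f\n", (bigF + 0.001f) - bigF);  /* ~0.000000 on IEEE-754 hardware */
    printf("double: %f\n", (bigD + 0.001)  - bigD);  /* ~0.001000 */
    return 0;
}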

those are just my $0.02…I could be completely wrong but this sort of makes sense to me.

You could, but no, it’s not that hard:

small numbers * Modeling = big numbers * Viewing = small numbers
small numbers * ModelView = small numbers

Small numbers = finer precision, big numbers = coarser precision. You want to stay in small numbers anytime you’re doing float math and want to maximize precision.

Solution: don’t use OpenGL to do your matrix math. Do it yourself in double precision, then just toss OpenGL a matrix now and then (glLoadMatrix).

This is a simpler version of what you described. The key difference is that instead of translating by the object you want to focus “on”, you translate by the object you want to focus “with” (the eyepoint) - trace it through and you’ll see what I mean. Cheap and easy.


Are you saying that I should replace all of my glTranslate(), glRotate(), etc. calls with my own code that does the matrix multiplications, but of course using double precision? And then I won’t have to translate things to the 0,0,0 position for viewing and instead they could remain at their far distances but because I used double precision to calculate their translations, etc then I won’t have the jumping of the objects?

Or were you referring to using my own matrix calculations for setting the view matrix?

I think I follow your concept in general and I agree that it would possibly be simpler or at least a good alternative solution. Thanks.

Yes. All of them.

And then I won’t have to translate things to the 0,0,0 position for viewing and instead they could remain at their far distances but because I used double precision to calculate their translations, etc then I won’t have the jumping of the objects?

Yes, but let me make sure we’re on the same page. You’ll model them in OBJECT SPACE close to the origin (small numbers = finer precision), then use the MODELING matrix to translate them into WORLD SPACE where their coordinates would be large.

The GPU and the OpenGL driver never actually stop in WORLD SPACE with these objects though, so you don’t much care that their world coordinates would be large, and would thus result in a loss of precision if you represented these world coordinates in 32-bit float in GL (either directly as vertex positions, or in a transform matrix).

Also, I’m not saying there’ll be no jumping. But the “jumping” will be limited to the maximum size of your objects and “double”-precision, not the size of your objects and “float”-precision.

Or were you referring to using my own matrix calculations for setting the view matrix?

Not just the VIEWING matrix, the MODELVIEW matrix (both MODELING and VIEWING matrices, and their product).

Recall:

(object space) * MODELING = (world space)
(object space) * (MODELING * VIEWING) = (eye space)
(object space) * MODELVIEW = (eye space)

Compute MODELING and VIEWING in double precision, smoosh them together in double precision, then hand to OpenGL as the MODELVIEW matrix via glLoadMatrixd.

Because the “big numbers” only result from dropping into world space, tell the GPU to skip world space and never go there! Then you don’t need to worry about big numbers being truncated by dropping into float precision.
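
A minimal sketch of that recipe (my own illustration, not code from the thread’s demo): matrices here are column-major double[16] as glLoadMatrixd expects, and with column vectors the combined matrix is VIEWING * MODELING (the notation above is written row-vector style, which is why the order looks flipped):

#include <GL/gl.h>

/* out = a * b for column-major 4x4 matrices, all in double precision. */
void mult4x4d(double out[16], const double a[16], const double b[16])
{
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            double sum = 0.0;
            for (int k = 0; k < 4; ++k)
                sum += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = sum;
        }
}

/* MODELING and VIEWING are built elsewhere in double precision
   (e.g. a double-precision lookAt for VIEWING, a translate for MODELING). */
void loadModelview(const double modeling[16], const double viewing[16])
{
    double modelview[16];
    mult4x4d(modelview, viewing, modeling);  /* "smoosh" them together in double */

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixd(modelview);  /* precision drops to float only here, after the combine */
}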

I like this approach and it is probably the easiest to implement, I think.

glTranslate, gluLookAt, etc. source code is available somewhere on the internet (I can’t remember where). It will be easy to add myTranslate, myLookAt that do exactly what the OpenGL versions do.

Maybe replacing the gluLookAt() function can even be enough - what do you think?

Thanks,

Alberto

No, that’s only the VIEWING transform side.

Let’s think about where the problem occurs.

You model the object (a ball let’s say) in object space. Say the coordinates are within 10 units (call them meters, for this example) of the origin. With float precision, you get ~7 digits of decimal precision, so you get maybe ~0.00001 meter or 1/100th of a millimeter precision worst case for those verts.

Then you build your MODELING matrix using OpenGL matrix functions (in float, of course) to shift this object into WORLD SPACE over at 100000, 100000. Again, 7 digits of precision so those values are only accurate to around .1 meters (10 centimeters). Oops.

And of course before this you stacked your VIEWING matrix on the MODELVIEW stack using OpenGL matrix functions (again, in float). Let’s suppose your eyepoint is over near that ball. So your viewing transform has a translate of about -100000, -100000. Again, 7 digits of precision so those values are only accurate to around 10 centimeters (oops).

Now in infinite precision, these big translates add to give you precisely the right answer. But given their large magnitudes and floating point round-off, the sum (because we used float to do this) is only accurate to about 10 centimeters. So MODELVIEW’s transformation from object space to eye space is only accurate to about 10 centimeters. Man, that sucks!

So, compute MODELING and VIEWING in double precision (~15 digits of decimal precision), smoosh them together in double precision, and then thunk down to float precision only when you hand the matrix to OpenGL. As you might guess, your “big translate numbers” have to be a lot bigger (1e8 bigger) to generate the same inaccuracy in the results.

(Note that I oversimplified this example for clarity, glossing over epsilon, sqrt of machine precision, well-conditioned functions, etc., where you might actually only get 1e4 accuracy from float instead of 1e7 even if you used “nice math”, but you get the idea.)
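
To put rough numbers on the “~7 digits vs ~15 digits” point, here is a tiny check (again just an illustration) of the smallest representable step near a coordinate of 100000 - the best case, before any round-off accumulates through the matrix math:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Spacing between adjacent representable values near 100000 units. */
    printf("float  step near 1e5: ~%g units\n", 100000.0 * FLT_EPSILON);  /* ~0.01 */
    printf("double step near 1e5: ~%g units\n", 100000.0 * DBL_EPSILON);  /* ~2e-11 */
    return 0;
}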

Good info, really good info…I’m going to have to try this later on; for now I’ll stick with my current solution (deadlines, etc). But I learned a lot…thank you very much for the comments.

Here is the code of the glu.LookAt() function. As you can see, several variables (and the matrix) are declared as floats.

I will first change this to use doubles, then use the matrix with gl.LoadMatrix() as suggested above.

The full source code file can be found here.

void GLAPIENTRY
gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez, GLdouble centerx,
	  GLdouble centery, GLdouble centerz, GLdouble upx, GLdouble upy,
	  GLdouble upz)
{
    int i;
    float forward[3], side[3], up[3];
    GLfloat m[4][4];

    forward[0] = centerx - eyex;
    forward[1] = centery - eyey;
    forward[2] = centerz - eyez;

    up[0] = upx;
    up[1] = upy;
    up[2] = upz;

    normalize(forward);

    /* Side = forward x up */
    cross(forward, up, side);
    normalize(side);

    /* Recompute up as: up = side x forward */
    cross(side, forward, up);

    __gluMakeIdentityf(&m[0][0]);
    m[0][0] = side[0];
    m[1][0] = side[1];
    m[2][0] = side[2];

    m[0][1] = up[0];
    m[1][1] = up[1];
    m[2][1] = up[2];

    m[0][2] = -forward[0];
    m[1][2] = -forward[1];
    m[2][2] = -forward[2];

    glMultMatrixf(&m[0][0]);
    glTranslated(-eyex, -eyey, -eyez);
}

Dark,

Now I have the following code:

gl.MatrixMode(GL_PROJECTION);
gl.LoadIdentity();

gluPerspective(angleOfView, aspect, near, far);
              
glMatrixMode(GL_MODELVIEW);
gl.LoadIdentity();

myLookAt(location, target, upVector);

DrawEntities();

As you can see below, gluPerspective works on doubles, and the myLookAt function is my version that works with doubles and loads the matrix using glLoadMatrixd().

The result is still the same - can you explain why? :stuck_out_tongue:

Thanks,

Alberto

void GLAPIENTRY
gluPerspective(GLdouble fovy, GLdouble aspect, GLdouble zNear, GLdouble zFar)
{
    GLdouble m[4][4];
    double sine, cotangent, deltaZ;
    double radians = fovy / 2 * __glPi / 180;

    deltaZ = zFar - zNear;
    sine = sin(radians);
    if ((deltaZ == 0) || (sine == 0) || (aspect == 0)) {
	return;
    }
    cotangent = COS(radians) / sine;

    __gluMakeIdentityd(&m[0][0]);
    m[0][0] = cotangent / aspect;
    m[1][1] = cotangent;
    m[2][2] = -(zFar + zNear) / deltaZ;
    m[2][3] = -1;
    m[3][2] = -2 * zNear * zFar / deltaZ;
    m[3][3] = 0;
    glMultMatrixd(&m[0][0]);
}

You’re loading your VIEWING transform into OpenGL via glLoadMatrixd. This will chop it down to float precision. Then in DrawEntities() I presume it appends your MODELING transform via glMultMatrix*, which also chops precision down to float and then does the multiply.

Stop. Don’t use OpenGL for matrix math. Don’t even use it to store intermediate matrices. That’ll drop you down to float precision, which “we’re assuming” is the root of your problem.

Instead, update your own double-precision MODELVIEW transform in your code. Then when you need to render a model, only then glLoadMatrixd your spiffy double-precision accurate matrix into OpenGL. That way, you wait until “after” the MODELING and VIEWING transforms have been combined to chop it down to float precision. That’s the key.
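
In terms of the snippet above, that would look roughly like this - a sketch only, assuming myLookAt now fills a column-major double[16] instead of calling GL, and that buildTranslationd and mult4x4d are double-precision helpers like the one sketched earlier (all of these helper names are placeholders):

#include <GL/gl.h>
#include <GL/glu.h>

/* Assumed double-precision helpers; none of them makes any GL calls. */
void myLookAt(double out[16], const double eye[3], const double center[3], const double up[3]);
void buildTranslationd(double out[16], double x, double y, double z);
void mult4x4d(double out[16], const double a[16], const double b[16]);
void DrawEntity(void);

void renderFrame(const double eye[3], const double center[3], const double up[3],
                 double entityX, double entityY, double entityZ,
                 double fovy, double aspect, double zNear, double zFar)
{
    double viewing[16], modeling[16], modelview[16];

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovy, aspect, zNear, zFar);   /* projection precision is usually not the issue */

    myLookAt(viewing, eye, center, up);          /* VIEWING built in double, never touches GL */

    buildTranslationd(modeling, entityX, entityY, entityZ);  /* MODELING: the entity's world position */
    mult4x4d(modelview, viewing, modeling);                  /* combine in double */

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixd(modelview);                    /* the only place the matrix is handed to GL */
    DrawEntity();                                /* entity is modeled in object space, near the origin */
}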

Hi Dark,

I don’t have any MODELING transform; I simply draw a figure using lines with coords like (500000, 500000, 500010, 500008), and the camera is looking pretty close to that area.

Do you mean I have to:

  1. Extract the gluPerspective source code
  2. Save the PROJECTION matrix in my code
  3. Compute myLookAt
  4. Save the MODELVIEW matrix in my code
  5. Combine the two (multiplication will be enough?) in my code
  6. Do the gl.LoadMatrixd(combined)
  7. Then to draw entities?

Thanks again.

Alberto

There’s your problem. OpenGL doesn’t do doubles. So when you give it those vertex coordinates, thunk, they’re chopped down to float precision, which means you only have maybe 0.1 unit accuracy.

Instead, model your object near the origin, use a MODELING transform to translate your object over 500000 (or wherever), and do your MODELVIEW matrix math in double precision all in your code.

Do you mean I have to:

  1. Extract the gluPerspective source code
  2. Save the PROJECTION matrix in my code
  3. Compute myLookAt
  4. Save the MODELVIEW matrix in my code
  5. Combine the two (multiplication will be enough?) in my code
  6. Do the gl.LoadMatrixd(combined)
  7. Then to draw entities?

Close. First, your problem probably isn’t in PROJECTION transform precision, so you can just use gluPerspective directly instead of #1 and #2.

Then do as I described above. You need to use a MODELING matrix, and stop positioning your objects at huge coordinates directly. Try to always model your objects around the origin, to maximize floating-point precision.

Just keep in mind you only get about 7 decimal digits with float. That’s it! So you can have 0.1 precision at 500000, or you can have 0.000001 precision at around 1.
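
For the concrete case of the lines around 500000, a sketch of what that re-modeling could look like (the local-origin value and helpers are illustrative; the compensating large translation is assumed to sit in the double-precision VIEWING matrix, so the big numbers cancel during the double multiply):

#include <GL/gl.h>

/* Assumed double-precision helpers, as in the sketches above. */
void buildTranslationd(double out[16], double x, double y, double z);
void mult4x4d(double out[16], const double a[16], const double b[16]);

void drawFigure(const double viewing[16])
{
    /* Keep the vertex data small relative to a local origin instead of
       feeding ~500000-sized coordinates straight to GL. */
    const double originX = 500000.0, originY = 500000.0;

    double modeling[16], modelview[16];
    buildTranslationd(modeling, originX, originY, 0.0);  /* MODELING: local -> world, in double */
    mult4x4d(modelview, viewing, modeling);

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixd(modelview);

    glBegin(GL_LINES);
    glVertex2d(0.0, 0.0);    /* was (500000, 500000) */
    glVertex2d(10.0, 8.0);   /* was (500010, 500008) */
    glEnd();
}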