Getting closer to the terrain, but textures become shaky/distorted

I am working on large-planet terrain rendering, in cubic-kilometer units, with GLSL shader programs. When I move closer to the ground, the textures become shaky and distorted. I suspect float inaccuracies due to the very small values at street level: one meter is 0.0001 units. Does anyone have a solution to eliminate the shakiness and distortion at street level? Would higher-precision floats resolve the problem?

Single-precision floating-point has a roughly constant relative error of 1/2^23 ~= 1.2e-7; the absolute error is proportional to the magnitude of the value. The symptoms you describe would be expected if both the viewpoint and the vertex coordinates share a large common offset.

Modelling a planet the size of the Earth (radius ~6400 km) would result in points on the surface being quantised to a resolution of ~760 mm, i.e. an error of up to ±380 mm.
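As a quick sanity check, a few lines of C++ (my own sketch, not anything from your code) will print the spacing between adjacent 32-bit floats at Earth-radius magnitude:

#include <cmath>
#include <cstdio>

int main() {
    // One ULP (the gap between adjacent 32-bit floats) at Earth-radius
    // magnitude, in metres: the finest position step representable there.
    float r = 6.4e6f; // ~6400 km, in metres
    float ulp = std::nextafterf(r, INFINITY) - r;
    std::printf("float ULP at %.0f m: %g m\n", r, ulp);
    // Prints 0.5 m; the ~760 mm figure above is the upper bound r * 2^-23.
}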

If that’s the issue, the solution is to remove the common offset from both the viewpoint and the vertices: store vertex coordinates relative to a local origin, specify the modelling transformations relative to that same origin, and calculate the viewpoint’s relative position using double-precision floating-point.
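A minimal sketch of that calculation, assuming GLM for the vector types (the function name is mine, purely illustrative):

#include <glm/glm.hpp>

// Keep full positions in double precision on the CPU; only the small
// camera-relative offset, which fits easily in a float, goes to the GPU.
glm::vec3 relativeToEye(const glm::dvec3& worldPos, const glm::dvec3& eyePos)
{
    glm::dvec3 offset = worldPos - eyePos; // subtraction done in double
    return glm::vec3(offset);              // result is small: safe to downcast
}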

I wouldn’t recommend trying to use double-precision floats on the GPU. It has a significant performance cost and ultimately isn’t necessary.

OK, I got it now. I tried reducing the planet size to a normalized coordinate system (one unit = one planet radius), but it is still shaky/distorted. Distant textures at the horizon are more stable than closer, local textures, though.

I tried changing the near and far planes. The original settings were near = 0.001 and far = 1.0e9. I tried 0.0000001 and 10.0 in the normalized coordinate system, but the shaking/distortion got worse.

I use a relative world coordinate system in which the camera’s origin is always (0, 0, 0): I subtract the camera position from the planet position to get relative world coordinates. The planet’s origin is (0, 0, 0) and its vertices are at the planet radius from that origin.

To get rid of the large offset between the viewpoint and the vertices, how about a local view frustum, mapping the normalized coordinates to virtual life-sized coordinates, or something similar? I am still looking for techniques to eliminate the shaking/distortion in the local area. I am new to terrain rendering.

The scale factor doesn’t matter. It makes no difference whether your coordinates are in metres, kilometres or millimetres.

What matters is the ratio of the relevant distances (vertex to vertex and vertex to viewpoint) relative to the distance to the coordinate system’s origin. The modelling transformation (typically the model-view matrix) subtracts the position of the viewpoint from the position of each vertex. If you subtract similar numbers, accuracy suffers. Consider e.g. 0.12345689 - 0.12345678 = 0.00000011. The accuracy has gone from 8 significant figures down to two. Changing one of the original values by 1 part in 10^8 changes the result by 1 part in 10.
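The same effect is easy to demonstrate in 32-bit float (a standalone sketch, not from your code):

#include <cstdio>

int main() {
    // Near 6.4e6 the float grid spacing is 0.5, so the fractional part of
    // the vertex position is lost before the subtraction even happens.
    float vertex = 6400000.125f; // intended position, in metres
    float camera = 6400000.0f;
    std::printf("%f\n", vertex - camera); // prints 0.000000, not 0.125000
}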

That’s to be expected. If you move your finger by 1cm you can see it clearly; if someone a kilometre away moves their finger by 1cm you wouldn’t notice.

The far plane rarely matters; you can typically set the far distance to infinity without adverse effects. Changing the near plane changes the depth resolution for most of the scene (which may result in “depth fighting” artifacts), but doesn’t affect the X-Y position.
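If you’re using GLM on the CPU side, it ships a ready-made infinite-far-plane projection; the parameter values below are just placeholders:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// With the far plane at infinity, only the near plane is left to tune,
// and it is the near plane that dominates depth-buffer resolution.
glm::mat4 proj = glm::infinitePerspective(
    glm::radians(60.0f), // vertical field of view
    16.0f / 9.0f,        // aspect ratio
    0.1f);               // near-plane distance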

That’s going to be a problem if you want to get close to the planet. Realistically, you need to split the surface into smaller chunks and store vertex positions relative to the centre of each chunk. When rendering chunks, calculate the offset of the chunk relative to the viewpoint using double precision (the final result only needs single precision, but the initial values and the subtraction need to use double precision).
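Extending the earlier sketch to chunks, the layout might look like this (GLM types again; the struct and names are hypothetical):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <vector>

struct Chunk {
    glm::dvec3 origin;               // chunk centre, in full planet coordinates
    std::vector<glm::vec3> vertices; // relative to origin: small, float is fine
};

// Per chunk, per frame: one double-precision subtraction, after which
// everything downstream can stay in 32-bit float.
glm::mat4 chunkModelView(const Chunk& c, const glm::dvec3& eyePos,
                         const glm::mat4& viewRotation) // rotation only
{
    glm::vec3 offset(c.origin - eyePos); // double subtraction, float result
    return viewRotation * glm::translate(glm::mat4(1.0f), offset);
}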

OK, I updated my code to use 64-bit vertices throughout, converting to 32-bit floats for the rendering/vertex buffers, with 32-bit model/view/projection matrices. The distortion is now gone, but the vertices are still shaky. LOD determination is now stable, and rendering is faster and smoother than before. I believe some issues remain on the GPU side due to float errors in shader processing (the limitation of 32-bit floats). A few hundred meters above the ground everything looks stable, but landing on the ground (a few meters above it) brings back the shaking, an “earthquake effect”.

While possible, this is massive overkill and not what you want to end up with for best performance, even if it did solve your jittering problem.

If you represent your world objects as 32-bit offsets relative to 64-bit accurate local origins, and represent your camera position using a 64-bit position, you can confine the potential transformation accuracy problems to just 1) accurately transforming their local origins to eye space rather than 2) accurately transforming all of their vertices independently. This is much less work, simpler and faster to perform (and debug), and easier on the GPU side.

In short, single-precision positions. Double-precision transforms (with double-precision accuracy of all positions used to compute those transforms).

GClements has given you some good advice here. But before you continue, I would suggest that you clearly identify the cause(s) of the jittering/shakiness you’re seeing. Is it due to inaccuracy transforming scene positions into eye and clip space? Poor Z precision? Inaccuracy in computing texture coordinates? Poor accuracy in computing the camera position from frame to frame? Irregular frame rate? … Or a combination of these?

You might consider cooking up a short, standalone test program that illustrates your method with one simple object. That way you can iterate faster, and you can post the code for feedback and suggestions. Alternatively, consider posting a video of what you’re seeing.

Here are links to YouTube videos that I uploaded. When I land on the ground, the vertices become shaky, the “earthquake effect”.

Besides the vertex positioning accuracy, it looks like your tessellation is changing from frame to frame, even when the eyepoint isn’t moving much. What’s up with that?

Good news! I found the problem: it was in the model/view/projection matrix. I had to split the model (world-space) matrix from the view and projection matrices in CPU space. I changed a line in GPU space:

gl_Position = gViewProj * gWorld * vec4(vPosition, 1.0);

All problems went away! No more shaky vertices! Only static errors remain now. I will work on a per-tile world matrix next.

That’s good. …though surprising.

If you’re doing this the way I think you are, gWorld has a HUGE translation, and gViewProj also has a HUGE translation. Neither can be represented in 32-bit float with sufficient precision to give you decent accuracy.

The translations in these transforms largely cancel each other out (when gViewProj is multiplied by gWorld) for geometry close to the eye, leaving small translations which can easily be represented in 32-bit float. However, you lost that accuracy when you tried to represent the huge translations in these two separate 32-bit float matrices.

The solution:

  1. "Combine gWorld and gView (or gViewProj) on the CPU using a 64-bit float (doubles) matrix,
  2. Convert this on the CPU from a 64-bit doubles matrix to a 32-bit float matrix (at this point, you don’t need the extra bits of precision anymore), and then
  3. Provide this to the GPU using a 32-bit float matrix.
  4. Then in the shader, just transform positions by that composite transform (gWorldView, or gWorldViewProj):
gl_Position = gWorldViewProj * vec4( vPosition, 1.0 );
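On the CPU side, that might look like the following sketch (assuming GLM, whose dmat4 is a double-precision matrix; the names mirror the shader uniforms and are otherwise illustrative):

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Multiply the two huge-translation matrices in double precision; their
// translations largely cancel in the product, so only after the multiply
// is it safe to drop down to 32-bit float.
glm::mat4 makeWorldViewProj(const glm::dmat4& world, const glm::dmat4& viewProj)
{
    return glm::mat4(viewProj * world); // compose in double, cast once
}

// Then upload the result as a single 32-bit uniform, e.g.
// glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(wvp));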

Well, I studied the jitter issues in a book called “3D Engine Design for Virtual Globes” (Chapter 5) and implemented the RTE method. All the jitter problems went away. The terrain is very smooth (no shaking) when I bring the camera to a few meters above the ground.
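For anyone else who lands on this thread: the core of the RTE approach is splitting each double-precision position into a high/low pair of floats on the CPU, roughly like this (a sketch of the idea using GLM, not the book’s exact code):

#include <glm/glm.hpp>

// The vertex shader later subtracts the similarly split eye position, high
// parts first, so the large magnitudes cancel before precision is lost.
struct SplitPosition { glm::vec3 high, low; };

SplitPosition splitPosition(const glm::dvec3& p)
{
    glm::vec3 high(p);                   // coarse part (what a plain cast keeps)
    glm::vec3 low(p - glm::dvec3(high)); // residual that the cast discarded
    return {high, low};
}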

Great! Thanks for following up with the solution you chose. That’ll help others who read this thread find solutions.