I am working on a geospatial rendering application using OpenGL, where I need to display and zoom into geographic data (e.g., property boundaries, maps) with high precision. The coordinates I work with are in longitude and latitude, requiring precision up to 9 decimal places.
The problem arises when I zoom in significantly, closer than 1-2 meters: OpenGL’s pipeline uses float values, and the precision loss leads to noticeable jittering or pixelation. My data transformations (translation, scaling) are performed in double, but I have to cast to float before sending the data to OpenGL, which introduces floating-point inaccuracies and degrades the visual rendering when zooming or panning.
Here’s what I’ve tried so far:
Using local coordinates: I shifted global coordinates to a local coordinate system (e.g., centering around a reference point) to reduce large coordinate values.
Dynamic origin translation: I updated the reference point dynamically during panning to keep the rendered data near the origin of OpenGL’s coordinate system.
Maintaining double precision in calculations: All transformations are performed in double on the CPU and only converted to float at the final stage before rendering.
Adjusting the projection matrix: Modified the orthographic projection matrix to better fit the viewport for precision.
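In code, the combination of these attempts looks roughly like this (a simplified sketch; `refPoint`, `toLocal`, and the coordinate values are illustrative, not my exact code):

```cpp
#include <vector>

// Simplified sketch: all math in double, cast to float only at upload time.
struct Vec2d { double x, y; };

// Dynamic reference point, updated while panning (attempt 2 above).
Vec2d refPoint = { 8.539182736, 47.368273645 };   // lon, lat (example values)

// Shift a global coordinate into the local frame around refPoint (attempt 1).
Vec2d toLocal(const Vec2d& global) {
    return { global.x - refPoint.x, global.y - refPoint.y };
}

// Final stage: cast to float for the OpenGL vertex buffer (attempt 3).
void appendVertex(std::vector<float>& buffer, const Vec2d& global) {
    const Vec2d local = toLocal(global);
    buffer.push_back(static_cast<float>(local.x));
    buffer.push_back(static_cast<float>(local.y));
}
```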
Despite these efforts, the precision issues persist, especially when zooming in closer. [Screenshots showing the jittering and pixelation effect were attached here.]
Can’t you work with double-precision floating-point numbers? OpenGL 4.0+ (or the ARB_gpu_shader_fp64 extension) provides support for them: scalars (double), vectors (dvec*), and matrices (dmat*).
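For example (a sketch, assuming a GL 4.0+ context with headers/loader set up elsewhere; the uniform name `u_modelview` is an assumed example):

```cpp
// Sketch: upload a double-precision matrix straight to a GLSL dmat4.
// Requires an OpenGL 4.0+ context or ARB_gpu_shader_fp64.
void uploadDoubleModelview(GLuint program, const GLdouble modelview[16]) {
    GLint loc = glGetUniformLocation(program, "u_modelview");
    glUniformMatrix4dv(loc, 1, GL_FALSE, modelview);
}

// Matching vertex-shader declaration:
//   #version 400
//   uniform dmat4 u_modelview;
```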
That’s not necessary, and suboptimal in that it will impair performance and reduce the set of supported platforms.
Single-precision has 23 bits of precision; that’s 1 part in 8 million, e.g. roughly 12 mm in 100 km (100 km / 2^23 ≈ 12 mm). Geospatial data never has that kind of precision, particularly if it’s in lat-lon (the error introduced by the choice of geoid model will far exceed 0.125 ppm).
The usual solution to this problem is to store data relative to some arbitrary origin point in approximately the centre of the dataset, so that you aren’t adding the distance to the Gulf of Guinea (where 0°N 0°E lies) to the numbers and losing precision as a result.
If your dataset spans a huge area, the solution is to split it into chunks, with the data in each chunk referenced relative to the origin of the chunk. Any rendering will either only use a small number of chunks surrounding the point of interest, or (for a large-scale view) won’t require millimetre precision (your monitor simply doesn’t have enough pixels for that).
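A sketch of that layout (the `Chunk` type and names are hypothetical; the point is that per-vertex data is stored as float offsets from a double-precision chunk origin):

```cpp
#include <vector>

// Per-chunk storage: a double-precision origin plus small float offsets.
// Offsets stay within the chunk, so float precision is sufficient there.
struct Chunk {
    double originX, originY;        // chunk origin in world coordinates
    std::vector<float> vertices;    // x,y pairs relative to the origin
};

// At render time, compute the chunk-to-camera translation in double,
// then cast the (now small) result to float for the modelview matrix.
void chunkTranslation(const Chunk& c, double camX, double camY,
                      float& outX, float& outY) {
    outX = static_cast<float>(c.originX - camX);
    outY = static_cast<float>(c.originY - camY);
}
```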
I couldn’t figure out exactly how you are doing it. In any case, I think you should recenter your doubles to within float precision limits before you transform them: if you cast the doubles to float after transformations whose results are already large, the precision has already been lost. Keep in mind that floats get progressively less precise the further they are from zero.
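A tiny, runnable illustration of why the ordering matters (made-up coordinates on the scale of UTM eastings):

```cpp
#include <cstdio>

int main() {
    double origin = 4500000.123456789;   // large world coordinate
    double point  = 4500001.987654321;   // ~1.86 m away from origin

    // Recenter in double first, then cast: the small difference survives.
    float good = static_cast<float>(point - origin);

    // Cast first, then subtract: near 4.5e6 a float's spacing is 0.5,
    // so the fractional digits are gone before the subtraction happens.
    float bad = static_cast<float>(point) - static_cast<float>(origin);

    std::printf("recenter, then cast: %.7f\n", good);  // ~1.8641975
    std::printf("cast, then recenter: %.7f\n", bad);   // 2.0000000
    return 0;
}
```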
Pass MODELVIEW transforms to GPU as FLOAT (as usual). However…
To compute MODELVIEW on the CPU, use DOUBLEs.
Represent MODELING and VIEWING transforms on the CPU using DOUBLEs.
This relates to what I believe GClements is suggesting:
The issue is that the MODELING transform for a “chunk” and the VIEWING transform for the frustum contain huge translations that need DOUBLE precision to represent accurately. Compute MODELVIEW = VIEWING * MODELING on the CPU with DOUBLEs, and when uploading MODELVIEW to the GPU, convert to FLOAT.
Why does this work? Near the eyepoint, the MODELVIEW translation has a relatively small magnitude, which can typically be represented to sufficient accuracy with FLOATs. In other words, the huge translations in MODELING and VIEWING largely cancel each other out for objects near the eye, leaving you with relatively small numbers in the MODELVIEW translation.
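Sketched in code (a hand-rolled double-precision matrix purely for illustration; glm::dmat4 or similar would do the same job, and the translation-only transforms are simplified examples):

```cpp
// Minimal column-major 4x4 double matrix, matching OpenGL's layout.
struct DMat4 { double m[16]; };

DMat4 identityD() {
    DMat4 r{}; r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0; return r;
}

DMat4 translationD(double x, double y, double z) {
    DMat4 r = identityD();
    r.m[12] = x; r.m[13] = y; r.m[14] = z;   // last column holds translation
    return r;
}

// Standard 4x4 product, computed entirely in double.
DMat4 multiplyD(const DMat4& a, const DMat4& b) {
    DMat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c*4 + row] += a.m[k*4 + row] * b.m[c*4 + k];
    return r;
}

void computeModelview(double chunkX, double chunkY,
                      double eyeX, double eyeY, float out[16]) {
    // MODELING and VIEWING both carry huge translations: keep them in double.
    DMat4 modeling = translationD(chunkX, chunkY, 0.0);
    DMat4 viewing  = translationD(-eyeX, -eyeY, 0.0);

    // MODELVIEW = VIEWING * MODELING, computed on the CPU with doubles.
    DMat4 modelview = multiplyD(viewing, modeling);

    // Near the eye the huge translations have cancelled (chunkX - eyeX is
    // small), so converting to float only now loses nothing that matters.
    for (int i = 0; i < 16; ++i)
        out[i] = static_cast<float>(modelview.m[i]);
    // Then upload as usual: glUniformMatrix4fv(mvLoc, 1, GL_FALSE, out);
}
```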