Ortho clip resolution limit

I am using OpenGL within Qt on a Linux CentOS system. I may have run into a limit of OpenGL. Before making radical changes, I would like an informed opinion.
Here is the essence of the question: can OpenGL manage a left clip and a right clip such that over 1000 points are displayed between them, with a time (or display) difference of 0.0008 seconds between consecutive points? And, going further, 0.0004 seconds between points?
Some details
We do telemetry and I have created a strip chart using OpenGL. Time is updated at 10,000 Hz and is expressed in nanoseconds since the epoch, Jan 1, 1970. I discovered that OpenGL cannot manage numbers that large, so the code extracts nanoseconds of the day and converts them to a double. That time is used for the X axis. The oldest time becomes the ortho left clip and the newest time becomes the ortho right clip. This all works fine with low update rate signals, around 100 Hz.
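In essence, the X values are produced like this (a simplified sketch, not the actual code; the function name is illustrative):

    #include <cstdint>

    // Incoming timestamps are nanoseconds since the epoch (Jan 1, 1970).
    // Those numbers are too large for OpenGL, so keep only nanoseconds
    // of the current day and convert to seconds as a double.
    double secondsOfDay(int64_t epochNanos)
    {
        const int64_t kNanosPerDay = 86400LL * 1000000000LL;
        return static_cast<double>(epochNanos % kNanosPerDay) / 1.0e9;
    }

    // The oldest and newest of these values become the ortho clips:
    // glOrtho(oldest, newest, yMin, yMax, -1.0, 1.0);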
We need to look at signals that update at 1250 Hz. That is 0.8 milliseconds, or 0.0008 seconds, between samples. The problem is that not all the samples are being displayed.
Extensive logging code was added to inspect the data in the buffer at the point where the samples/points are drawn on the screen, which the code does with:

glVertex2d( x, y );

Every sample expected is indeed being handed to that function, and the time delta between consecutive samples is 0.00080.
I am beginning to suspect that a resolution of 0.0008 is too small for OpenGL.
I am glad to show code, but it runs in a rather complex environment and is difficult to extract to a machine that is not configured for it.
Still, it all boils down to a simple question: is that resolution too small for OpenGL?
The only possibility currently envisioned is to change the X axis, time, from seconds of the day (where the resolution must reach 0.0008 seconds) to milliseconds of the day (where the resolution need only be 0.1 or 0.01).

Your thoughts please. And thank you for your time.

You need to eliminate any offset before passing the values to OpenGL.

OpenGL uses single-precision floating-point internally, which has a 24-bit significand (including the leading 1 bit, which isn’t stored). This means that it can represent at most 2^24 equally-spaced values; after that, the spacing doubles. So it can represent every integer up to 16,777,216 (2^24) exactly, then the next representable value is 16,777,218 (2^24 + 2); 16,777,217 (2^24 + 1) can’t be represented exactly and will be rounded.
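A few lines of C++ make this visible (illustrative only):

    #include <cstdio>

    int main()
    {
        float a = 16777216.0f; // 2^24: exactly representable
        float b = a + 1.0f;    // 2^24 + 1 rounds back down to 2^24
        float c = a + 2.0f;    // 2^24 + 2 is the next representable value
        printf("%.1f %.1f %.1f\n", a, b, c); // 16777216.0 16777216.0 16777218.0
        return 0;
    }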

With a resolution of 800 μs, you’re limited to around 3.7 hours (2^24 × 0.0008 s ≈ 13,400 s) before you run out of bits and the precision drops (i.e. consecutive timestamps may have identical floating-point values). So even using midnight as the epoch is going to be too much of an offset for most of the day (if you’d used 1970-01-01, the precision would be just over two minutes, as the spacing between representable floats near 1.6×10^9 is 128 seconds). So perhaps use the start of the hour as the epoch. If you need to make plots which span the hour mark, do it in two sections, changing the matrix in between.
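In other words, subtract a recent reference time from every timestamp before it gets anywhere near OpenGL. A sketch of the idea (names are illustrative):

    #include <cstdint>

    // Rebase timestamps to the start of the current hour so the values
    // handed to OpenGL stay small enough for single precision.
    double secondsSinceHourStart(int64_t epochNanos)
    {
        const int64_t kNanosPerHour = 3600LL * 1000000000LL;
        const int64_t hourStart = (epochNanos / kNanosPerHour) * kNanosPerHour;
        return static_cast<double>(epochNanos - hourStart) / 1.0e9;
    }

    // Values are now in [0, 3600), well inside the ~13,400 s that
    // 800 us steps allow before single precision runs out.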

Note that OpenGL 4.1 and later support double precision internally, but using it would probably require a complete re-write of your code (e.g. it would require the use of a vertex shader, as the fixed-function transformations available in the compatibility profile are single-precision).


The use of single precision is the answer. Since the app needs resolution down to a few hundred microseconds, seconds of the day cannot work. The app will need to learn the current time at startup, then start the plotting time at something like zero. For user displays it will have to translate from its internal time to the current real time.
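For the record, the plan is roughly this (a sketch; names are illustrative):

    #include <cstdint>

    // Captured once at startup; all plotting is relative to this.
    static int64_t plotEpochNanos = 0;

    // X value handed to OpenGL: small, near zero.
    double toPlotSeconds(int64_t epochNanos)
    {
        return static_cast<double>(epochNanos - plotEpochNanos) / 1.0e9;
    }

    // Translate back to real time for user-facing labels.
    int64_t toEpochNanos(double plotSeconds)
    {
        return plotEpochNanos + static_cast<int64_t>(plotSeconds * 1.0e9);
    }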
Thank you for that information.