I am using OpenGL within Qt on a CentOS Linux system. I may have run into a limit of OpenGL. Before making radical changes, I would like an informed opinion.
Here is the essence of the question: can OpenGL manage a left clip and a right clip such that over 1000 points are displayed between them, with a time (or display) difference of 0.0008 seconds between each point? And perhaps go further, to as little as 0.0004 seconds between points?
Some details:
We do telemetry, and I have created a strip chart using OpenGL. Time is updated at 10,000 Hz and is expressed in nanoseconds since the epoch, Jan 1, 1970. I discovered that OpenGL cannot manage numbers that large, so the code extracts nanoseconds of the day and converts that to a double. That time is used for the X axis. The oldest time becomes the ortho left clip and the newest time becomes the ortho right clip. This all works fine with low update rate signals, maybe 100 Hz.
We need to look at signals that update at 1250 Hz. That is 0.8 milliseconds, or 0.0008 seconds, between samples. The problem is that not all the samples are being displayed.
Some extensive logging code was added to look at the data in the buffer, and at the point where the samples/points are drawn on the screen, where the code does this:
glVertex2d( x, y );
Every sample expected is indeed being handed to that function, and the time delta between consecutive samples is 0.00080.
I am beginning to suspect that a resolution of 0.0008 is too small for OpenGL.
I am glad to show code, but it runs in a rather complex environment and is difficult to extract to a machine that is not configured properly.
Still, it all boils down to a simple question: is that resolution too small for OpenGL?
The only possibility currently envisioned is something like changing the X axis, time, from:
seconds of the day, needing resolution to 0.0008 seconds
to: milliseconds of the day, where the resolution need only be 0.1 or 0.01.
Your thoughts please. And thank you for your time.