[QUOTE=tobielolness;1292531]I’m working on generating a periodic pattern using OpenGL, and the requirement is that all frames have to be rendered and displayed on a 240Hz screen without any drop/skip/tear. The patterns are really simple, so rendering time is fast and will not be an issue.
I tried using VSync to do this (mentioned in an earlier post), but it’s not reliable because effectively I only have a buffer of size 1, and if some other process on the CPU gets in the way it can cause frame dropping. The way that I understand it, with glfwSwapInterval set to 1, my main loop will be clocked by the VSync interrupt…[/QUOTE]
No, in general this isn’t correct.
Your application renders frames. After they’re rendered, they go into a queue of image buffers, the “swap chain”. At the tail end of that swap chain, image buffers are dequeued and used by the display hardware to scan out each image.
What VSync ON (SwapInterval 1) does is prevent the driver – at the tail end of that swap chain – from changing which image buffer is currently being scanned out to the monitor/display in the middle of that image. This prevents tearing artifacts, where you’d see a piece of one image on the top of the display and pieces of one or more other frames on the bottom.
At the other end of the buffer swap chain is your application, rendering frames that are inserted into this buffer. In between your app and this swap chain is the OpenGL driver, which can in some cases queue up multiple frames of GL commands ahead of the frame being rendered right now on the GPU.
So, your process might end up running at the VSync rate. But that’s only because the pipeline of rendered frames in the swap chain and queued commands in the driver is completely backed up into your render loop. It’s not because there’s no pipelining between your app and what frames are displayed.
Now there are settings in your driver with which you can control this pipelining. I think you already mentioned you are using an NVidia GTX 1070. It’ll help this discussion if you say which GPU driver version and OS you are working with here. For instance, NVidia’s GL driver provides some control over both the number of image buffers in the swap chain (as well as whether the buffers in the chain are accessed as a FIFO or not) and how many frames of GL commands it will read (buffer) ahead of the current frame.
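For instance, on Linux, NVidia’s proprietary driver exposes some of this through environment variables. A sketch only – these variable names and their exact effects vary by driver version, so verify them against the README shipped with your driver before relying on them:

```shell
# Assumption: Linux + NVidia proprietary driver. Check your driver's README --
# these variables and their accepted values differ across driver versions.
export __GL_SYNC_TO_VBLANK=1     # force VSync ON in the GL driver
export __GL_MaxFramesAllowed=2   # cap how many frames the driver queues ahead
# ...then launch your application from this shell so it inherits the settings.
```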
There are also things you can do in your application to limit this pipelining. But above, I’m just talking about the default behavior.
[QUOTE]What I want to do is to send more than 240 frames per second, and store the extra frames in a buffer, so that if the CPU fails to send a frame in time, the buffer can ensure the pattern continues without a glitch.[/QUOTE]
That’s where this pipelining in the GL driver and the swap chain can help you. The tradeoff is you have to be able to deal with inconsistent per-frame latencies on your draw thread. But for your use case, you may not care about that.
[QUOTE]I’m wondering if there’s a way to combine buffering and VSync, so that the pattern can appear at precisely 240 Hz without tearing/frame dropping.[/QUOTE]
Given sufficient performance, sure. Start at 60Hz. Get that solid. Then rinse/repeat for 120Hz and then 240Hz. Also make sure to choose a load (at least initially) that your GPU can easily render within a 4.16ms frame time.