opengl latency

I need to update a live feed in my OpenGL application with minimal delay … I have 3 frames of delay with a Quadro FX 3400 card … I suspect OpenGL is buffering frames … Is it true? Can I do something to avoid that? RivaTuner?

Best regards

You can of course disable double buffering and use glFinish to force OpenGL to empty the render queue, so you will get only as much delay as the rendering itself will cause. But this will hurt overall performance…

It might be better to accept a few frames delay so you can get a bit more parallelism between uploading and rendering a frame.

I need to have no delay … I have 100 ms of delay between the live input and the NVIDIA card output … Are there options to disable frame queueing?

To force emptying of the command pipeline, use glFlush() (better performance than glFinish()).
You may keep double buffering, but be sure to disable vsync (i.e. wglSwapIntervalEXT(0) under Windows; it depends on the OS, and beware of driver-forced behaviour).

I have tried to use RivaTuner … I can set a prerender limit of 0 in DirectX, but in OpenGL I can only go down to a limit of 2 … I have also tried the NVIDIA API to set the number of buffered frames … But it seems to have no impact in OpenGL.

I don’t think this is an OpenGL issue at all.
You need the first frame to capture your video. In the second frame you have your video data available for texturing. And finally you can display your rendering in the next frame.
This will give you a minimum of 2 frames delay.
Of course a lot of details can contribute to more than 2 frames of delay (a lot of video grabbers need to do buffering themselves, synchronisation, etc.).
If you are doing video capturing via MS DirectShow you will not be able to reduce latency much…

If you believe it is a card problem, I would suggest contacting the developer support people for the card technology vendor. This would be NVIDIA or ATI.

I use an NVIDIA card… In the NVIDIA API there is a parameter to control the number of queued frames … But it seems to be related to DirectX … I don’t know if OpenGL buffers frames, if this is vendor related, or what I can do … I have found that with RivaTuner I can increase the prerender limit up to 9 … and then the delay increases… But with this software I cannot set the prerender limit to 0 … the minimum is 2 …

glFinish is the right thing to use, not glFlush; it does no good to just glFlush if your code can continue to execute with graphics calls queued. The CPU will still run ahead.

Finish stalls the CPU until it is basically in sync with graphics and is essential for low latency operation.

Use glFinish after swap but before your screen clear (flush the clear) before executing your update code.

It’s not possible to have no latency, you do have video output from the card and at 60Hz that’s 16.7 milliseconds at least to get the pixels on the bottom of your screen out. You can increase the refresh rate but there are limits.

Increase your frame rate to as high as your display supports and make sure you don’t have triple buffering on or anything crazy like that. Call glFinish after your swap call and update just before drawing stuff. The maximum latency won’t exceed one refresh interval. You will lose some draw time and may be subject to scheduling issues, but that’s the price you have to pay; for the discerning graphics aficionado it’s well worth it. For the benchmarketeers this is a horrible scary nightmare, but then again so is disabling vsync, so that tells you all you need to know about listening to them.

This all assumes you have lowish computation & i/o overhead. If you don’t you need to get smarter about parallelizing CPU and graphics, but it’s not a trivial subject especially when you want to combine that with low latency.

Hi dorbie,

why did you say there is a minimum latency of 67 milliseconds at 60 Hz? Isn’t one frame = 16.7 ms?

So, as a conclusion, this is what I need to do, isn’t it?

I think the crucial part here is from OP’s second post in this thread:

I need to have no delay
As this is obviously impossible, I can only think of two possible solutions:

  1. Add a few frames of delay to the main stream to match the transformed/modified output before combining them. That way one can create the illusion of “no delay”. This obviously only works for “one-way” systems (i.e. no “real-time” interactivity).

  2. Use a DVE (digital video effects unit) that can do the transformations you need in hardware (unless you indeed need the expressive power of e.g. OpenGL transformations and modifications), and use that output. This could potentially incur as little as sub-millisecond overhead.

As for 2, I only know about one affordable thing that ever did that - the NTSC-only NewTek Video Toaster for the Amiga back in … 1985-1987?

I just now came to think of another option.
It could, theoretically (I never used it myself, so I can’t say), be that some high-end hardware, e.g. from SGI, can map an analogue input video stream onto an OpenGL quad or an even more complex surface. As this is however way beyond the scope of OpenGL, I believe you could get better advice from other sources.

I really need to embed a live feed with no delay in my OpenGL application …

67 milliseconds at 60Hz? One frame = 16 ms ?

It was a brain fart, I’ve corrected it.

This is more what I’m saying:


P.S. if you’re reading some video from somewhere you might want to do that in the PaintGL code not externally. Where you put it depends on the details of what it does and your real latency requirements vs performance.

Nothing takes zero latency as has been said, not even hardware. Anyone specifying this doesn’t know what they’re talking about.

yes I know … there is no zero latency … But it would be good if I had a maximum of two frames … (better 1 … hopes!!) …

Definitely doable then.


some news … I have tried to use glFinish and glFlush … and it’s not better … and I have a lot of drops …


Could you be more specific? Give us information about the live data (resolution, framerate, did you use PBO or PDR or just plain OpenGL, input hardware, is audio embedded or not).

I have made an application that can read data from a DirectShow video input card, upload frames as textures and render the textures using OpenGL. It works in real time with a PAL (720×576i) video feed.

If you give me enough information… maybe I can help you.


A lot of drops is not the same as the overall approach failing.

Are you expecting real-time behavior?

Getting a system running with real-time performance and streaming video requires a lot of attention to detail and extensive knowledge of what’s going on in your system.

You’re not going to get it by posting one-liners like “it’s not working”.

There’s no word on what improved, what the failure mode is, what the latency is, how you’re measuring any of this, or how your video is arriving; you’ve posted less than the bare minimum.

What’s the typical latency vs. the max? What else is running on your system? How are you measuring any of this?