Nvidia driver performance glitches, is it a bug?

I’m working on a real-time application and I’ve observed some periodic performance glitches (0.5ms) in the rendering with the nVidia driver. I’ve reduced the problem to a simple application. The application draws a given number of VBOs (500 in this case) with a basic shader bound.

I’ve benchmarked with a GTX280 and a Quadro5800 on Windows XP. When I run my benchmark, I kill all services that are not needed, including explorer.exe. I use vertical sync. My process runs at real-time priority and is pinned to a specific CPU.

Basically, my simple test application is doing this:
For each frame:
– Draw
– glFinish
– SwapBuffers
– glFinish

I timed all these calls and got the following graph:

Every 840th glFinish, there is a glitch. When I monitor kernel system calls, I can see some NtGdiDeleteObjectApp calls during the glitchy glFinish. The process never loses the CPU; it looks as if the CPU is doing something different.

There is no glitch if I run the same testApp on a system with an ATI card.

I’m wondering if anyone else has this problem with nVidia drivers.

Does anyone know a way to control this glitch?

My test app is available here:

jpchristin, what driver version is this?


I have tried 177.41, 180.48 and 182.08.


Thanks, we’ll take a look. Does the glitch go away if you remove one or both of the glFinish calls?


The glitch is still there, even if I do not call glFinish.

In my test app, I’ve created a function that simulates glFinish:

// This function simulates glFinish by waiting for a timer query result.
// It actively polls, which is what we assume glFinish does internally.
void SimulateglFinish()
{
	GLuint queryDraw;
	GLint available = 0;
	glGenQueries(1, &queryDraw);

	// An empty timer query: its result only becomes available once all
	// previously issued commands have completed.
	glBeginQuery(GL_TIME_ELAPSED_EXT, queryDraw);
	glEndQuery(GL_TIME_ELAPSED_EXT);

	// Actively poll until the result becomes available
	while (!available)
		glGetQueryObjectiv(queryDraw, GL_QUERY_RESULT_AVAILABLE, &available);

	glDeleteQueries(1, &queryDraw);
}

One thing I noticed on my last PC was that having nView Desktop Manager enabled caused very erratic, stuttering performance (this kept happening over two years, thus quite a few driver iterations).

If you’ve got that enabled, try disabling it.

Thanks for the hint, but when I run my benchmark I stop every service that is not critical or needed.

What about the NVidia control panel settings? Try disabling the “threaded optimization”, or whatever it’s called.

I’ve done my benchmark with and without the threaded optimization.
The graph shown was done with the threaded optimization set to OFF.
The glitch is bigger when the threaded optimization is set to ON or AUTO.