I am seeing an issue where GL calls that happen in parallel are causing a noticeable performance hit in my application. I have two threads, each with its own GL context, rendering to two different windows. Under older NVidia drivers, these two renderers can render at full tilt with no problems. Now it seems that if the CPU attempts to execute two GL calls simultaneously, it takes a huge performance hit. I can confirm this by putting a static mutex around every GL call, thus preventing the CPU from making concurrent GL calls. With these mutexes in place, the application runs fine. I really don’t want to have to wrap every GL call with a mutex, and I feel the driver should be taking care of concurrency and scheduling of GL calls. It seems odd that something this basic would all of a sudden be implemented wrong in the driver.
This problem arose when I switched from driver version 181.22 to the latest drivers (260.99 at the time of writing), but it is present all the way back to the 190/191 drivers.
I am running Windows XP with a 9500GT GPU.
Does anyone have any insight into this problem, or is anyone else experiencing it?
With the 190 drivers, NVidia changed something inside the drivers that managed how multiple threads, multiple contexts, and multiple graphics cards in a machine were handled. For instance, they broke NV_gpu_affinity on Windows 7.
Back then I was told that they were working on improving these features, but I don’t know if those improvements have surfaced yet in the publicly available drivers.
Maybe some folks from NVidia can comment on that?
I agree this seems like a bug in the driver. What I don’t understand is how it could have gone unfixed for so long. It has now been a year and eight months since the 190 drivers were released. Furthermore, I’m surprised by how little information I can find about this online. I would think others would be hitting this problem at a much higher rate than I am seeing. I guess most games are single-threaded when it comes to rendering.
I have also run into this bug. Try disabling “Threaded optimization” in the NVIDIA Control Panel. It helped me a bit.
I have already tinkered with that setting. It does seem to help a small amount, but it in no way fixes the problem. The mutex test I described in my original post was done with “Threaded optimization” off, and it showed a much larger performance gain than adjusting this setting.