An 'Intel(R) HD Graphics 4000' War Story

We have a commercial Photoshop plug-in that uses OpenGL both for a high-responsiveness preview display and for final rendering of the user’s results. We use GL_RGBA32F for all internal data.

Our last release worked on all kinds of systems all over the world. So well, in fact, that I remember thinking, "Gee, OpenGL has finally arrived as a serious, professional graphics subsystem that just works."

Then a customer from Australia reported that his system, with a decent dual-core (four-thread) Intel(R) Core™ i7-3667U CPU @ 2.00GHz running Windows 10 v1703, ran so slowly as to be utterly unusable. OpenGL-based operations were taking tens of seconds to complete, but only in the 64-bit build, not the 32-bit build from the same source.

Logs didn’t show any obvious reason for the problem, so we embarked on a quest to clean up our OpenGL implementation, to ferret out every little resource overuse and every wrong or redundant command sequence. We turned the debugging output all the way up and set out to eliminate all the warnings - and hopefully find and fix whatever we were doing to irritate the little Intel GPU in the process.

Testing with NVIDIA and a different Intel GPU showed a clean bill of health and full functionality.

Since we don’t have an Intel HD 4000 GPU to test with here, we had to have the customer try our new software builds and send us logs. The customer is on the other side of the world, so the cycle time was typically a day. Fortunately, he was very patient and careful.

Long story short, we figured out that the Intel HD 4000 absolutely HATES glFinish(). It causes long internal timeouts - even a single glFinish() call in our rendering loop wreaked havoc. So I recommend: [b]

Use no glFinish() commands to synchronize the GPU and CPU with the Intel(R) HD Graphics 4000.[/b]

We also learned that the driver will fail glReadPixels() calls if you set glReadBuffer() and/or glDrawBuffer() to GL_NONE at some earlier point, even if the proper buffers are bound at the time of the read. [b]

Don’t set glReadBuffer or glDrawBuffer to GL_NONE with the Intel(R) HD Graphics 4000.[/b]

Finally, we’re still seeing a number of these emitted when we destroy the rendering context (possibly one per shader):

From OpenGL API, in Rendering Context 00040006: OpenGL API Error
    Debug Frame = 'None'
    ID = 1281
    Message = GL error GL_INVALID_VALUE

Errors during cleanup don’t affect the customer experience, but we’re still going to think through what could be going wrong and try to correct it.


Noel, you saved me. Thank you so much. I’ve just been debugging an identical issue with an Intel HD 2000 (also from a user in Australia!), and thanks to log statements I was able to zero in on glFinish(). APITrace also showed that the context destruction was riddled with errors.

Even more intriguingly, I realized that “touching” the main UI right before glFinish() (e.g. updating a RichEdit control that was showing my log statements) made everything run smoothly.

So googling up “opengl intel card glfinish taking 5 seconds” sent me here, and my heart rejoiced at seeing I was on the right track!

Thank you again - now I’m off to disable glFinish() on these cards. What’s the best way to detect the card, in your opinion? Checking GL_RENDERER?
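For my own part, I’m thinking of something like this: parse the string from glGetString(GL_RENDERER) once after context creation and set a flag. A sketch (the helper name is mine, and the model list only covers the two parts mentioned in this thread - it may need extending):

```c
#include <string.h>

/* Hypothetical helper: decide from the GL_RENDERER string whether to
 * avoid glFinish().  Call once after context creation, e.g. with
 * (const char *)glGetString(GL_RENDERER). */
static int renderer_needs_glfinish_workaround(const char *renderer)
{
    /* Only the Intel HD 2000/4000 parts have shown this behavior so far. */
    static const char *models[] = {
        "HD Graphics 2000",
        "HD Graphics 4000",
    };

    if (renderer == NULL || strstr(renderer, "Intel") == NULL)
        return 0;

    for (size_t i = 0; i < sizeof models / sizeof models[0]; ++i)
        if (strstr(renderer, models[i]) != NULL)
            return 1;

    return 0;
}
```

Matching on substrings keeps it robust against the vendor decorations that drivers prepend, e.g. "Intel(R) HD Graphics 4000" would match.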
