SDL_GL_SwapWindow() only works intermittently on one of my machines

The problem is that SDL_GL_SwapWindow is only updating the screen intermittently, e.g. it may redraw 10 times very quickly, but then only once every 3 seconds or so. More frustratingly, it works on one of my computers and not another.

The problem laptop has OpenGL 4.5 and I’m on Windows 10, using the latest SDL 2.0.12 and GLEW 2.1.0.

I’ve got the problem narrowed down to a smallish example where I’m only changing the clear color. I know rendering works because I have tested it with a simple textured cube, and that displays fine.

This sample application works on one computer (also Windows, but OpenGL 4.6) but not another.

My SDL setup code is more or less as follows:


  *window = SDL_CreateWindow("OpenGL Test",
                             SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                             g_width, g_height,
                             SDL_WINDOW_OPENGL);

  { // Setup GL
    // Disable deprecated functions
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    // OpenGL 4.5
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 5);
    // Use double-buffering
    enum {
      SINGLE_BUFFER = 0,
      DOUBLE_BUFFER = 1
    };
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, DOUBLE_BUFFER);
    // Sync buffer swap with monitor refresh rate
    enum {
      ADAPTIVE_VSYNC = -1,
      IMMEDIATE = 0,
      VSYNC = 1
    };
    SDL_GL_SetSwapInterval(VSYNC);
  }

  *context = SDL_GL_CreateContext(*window);

I’ve toyed with software rendering instead of hardware rendering, all three swap intervals, single buffering, and fullscreen mode, to no avail.

And my main loop contains the following:

      r32 value = (ticks % 360) / 360.f;
      glClearColor(value, value, value, 1.f);
      glClear(GL_COLOR_BUFFER_BIT);
      SDL_GL_SwapWindow(window);

This loop runs 60 times a second (I verify this with print statements), and every iteration is always <2 ms.

The result is that I see the color grow closer to white only every 2 or 3 seconds, with sudden jumps to a much brighter or darker shade.

If I print out the ticks counter I see it update 60 times a second, so I know nothing is blocking or slowing down. I’ve also timed the whole loop and it’s always under 2 ms.

I’ve successfully run the OpenGL Extensions Viewer 6.0 benchmark on the laptop and it passes for all available versions of OpenGL. I’ve verified my graphics drivers are up to date.

I was able to get it to work by setting the SDL_GL attributes after creating the context, although I don’t know why this is the case.

My read of the docs is that all of these attributes should be set before the window and context are created, as they affect the creation of both.

Are you sure you moved them after and that fixed it?

What happens if you remove them all? I’m wondering if your attribute settings weren’t just ignored because of where you moved them. Could be that your previous settings were somehow fouling up the creation of a HW-accelerated double-buffered window/context.

Good catch. Thanks for looking into it.

I moved it back before and figured out that setting the OpenGL context version to 4.5 was causing the issue. By default SDL was choosing 2.1, which works smoothly.

So now I don’t understand why 4.5 wouldn’t work. I’m able to get it to work with version 3.1, but no higher. Setting it to 3.2 reintroduces the intermittent update issue I was having.

I also tested version 4.6 (which is one minor version past what my laptop can handle), and that crashes as expected.

It may just be a limitation of your graphics drivers.

I’d suggest that you download and run this:

See what it tells you for the latest supported GL compatibility and core profile versions.

Also, you might upload your results to the DB, and post links to your entries. That’ll be useful to readers of this thread to make sense of your findings.

I did as you suggested and my db entries are here:

(apologies, I can’t include links.)

My laptop does support OpenGL 4.5, and I ran the benchmark (with the spinning cube) in OpenGL Extensions Viewer 6.0 and that worked just fine.

Thanks. And on the links, new users can’t post them (to hinder spammers). However, after you’ve made a few more posts, the forums will let you post links.

I looked at your driver report. That does suggest that your driver thinks it can offer OpenGL 4.5, in either the core or compatibility profile.

After setting up your window and context, you might dump GL_RENDERER and GL_VERSION to verify that you’re getting the driver/version support that you think you are.
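Something like this, for example (a sketch; assumes a current context and that GLEW is already initialized):

```
  printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
  printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
```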

A few thoughts that might help you get a line on this: DWM (the Windows display manager/compositor) will drop frames if there’s insufficient perf to drive your app and DWM at the required frame rate. This is a huge annoyance. To minimize the chance of drops, render in a full-screen exclusive window.

Also (related to performance), most laptops aren’t great for graphics performance (though there are high-end exceptions). And consider that you’re trying to render with the low-end GPU built into your CPU chip, as opposed to a dedicated on-board or discrete NVIDIA or AMD chip. So I would suggest getting this working flawlessly on a desktop GPU with solid drivers, and then try to adapt to low-end GPUs and laptop-level performance.

If your laptop is Optimus, be sure to set your power settings to always prefer the high performance GPU, not the on-board Intel.

Checked and GL_RENDERER, GL_VERSION are:

Intel® UHD Graphics 620, 4.5.0 - Build

So those look correct to me.
I’ve tried full screen - no problems.

Works correctly on a high-end laptop (with a real, separate NVIDIA graphics card) with OGL 4.6.

I really don’t think this is a performance issue. My laptop is able to run the spinning cube benchmark at ~500 fps for all versions of OpenGL up to 4.5, using the realtech-vr OpenGL Extensions Viewer 6.0.

And I’m only setting the clear color: not creating/loading shaders, not sending any uniforms or buffering any vertex objects. It’s pretty consistently only updating every 3 seconds or so.

I can probably make do with version 3.1. It seems like it’s still letting me use GLSL version 450, but I haven’t tested that extensively.

It’s up to you, but it’s probably worth digging a little deeper to see if you can get a line on what’s causing this thing to miss displaying some frames.

First: With the application in a misbehaving state, and the code correct as far as you’re aware (hardware accelerated, double-buffered, etc.), run it under Very Sleepy and see where it thinks your app is spending all of its time. You could try one run with VSync on and one with VSync off.

And next, if your investigations seem to suggest that your app is rendering and submitting frames to the display system very regularly, then see what PresentMon thinks. This gives you a really good view of what’s going on with frame submission and display at the tail-end of the pipeline. This’ll tell you all kinds of useful things, including if some of your frames are being dropped (discarded) rather than displayed.

Those are very cool applications, thanks for sharing.

I gave Very Sleepy a try; it seems to be spending most of its time in a few functions:


Not sure about the first but the other two seem to indicate that my app spends most of its time waiting.

I ran PresentMon.
It looks like about 15 frames are dropped before one is displayed.
Sometimes, but not always, immediately after a successfully displayed frame there is a huge spike in MsBetweenPresents, which is typically under 1 ms (sometimes up to 4 ms). The spike is on the order of 2000 ms.

MsUntilRenderComplete is always around 2-5 ms.
MsUntilDisplayed is 16-25 ms (when not dropped).

Here is a sample spike:

Application ProcessID SwapChainAddress Runtime SyncInterval PresentFlags AllowsTearing PresentMode Dropped TimeInSeconds MsBetweenPresents MsBetweenDisplayChange MsInPresentAPI MsUntilRenderComplete MsUntilDisplayed
game.exe 10276 0x0000000000000000 Other -1 0 0 Composed: Copy with GPU GDI 1 1.969471 0.379 0 0 1.018 0
game.exe 10276 0x0000000000000000 Other -1 0 0 Composed: Copy with GPU GDI 0 1.969842 0.372 16.64 0 1.037 22.964
game.exe 10276 0x0000000000000000 Other -1 0 0 Composed: Copy with GPU GDI 1 3.978931 2009.089 0 0 0.803 0

I tried to create a minimal but complete program to recreate the problem:

gist DOT github DOT com/chebert/23b1ccbe1cfc55c117c9d45e3e1d9ee1

Interestingly, if I bump the frame time delay down (shorter frames), the issue is still there, but the latency between correctly displayed frames is shorter.

If I bump the delay time up (longer frames), the latency between correct frame displays is longer.

This indicates to me that the dropped frames are linked to the number of frames that have been presented, rather than to how long each frame takes to render.

Back at my regular 60fps loop (16 ms frames), I put the following code in a loop

  for (int i = 0; i < 20; ++i) {
    glClearColor(value, value, value, 1.f);
  }

And it actually runs pretty smoothly now, although the time spent is now about 1-5 ms (with some spikes of 12-20ms).

Great news!

I found a good solution.

I need to call SDL_GL_SetSwapInterval(1 /*VSYNC*/); after my call to glewInit().

After that my test in 4.5 works flawlessly.

Good deal. Glad you solved it.

Yes, the underlying functions called by SDL_GL_SetSwapInterval() (wglSwapIntervalEXT() on Windows and glXSwapIntervalEXT() on Linux) require an active context; glewInit() loads those extension entry points once the context is current, which is why the call only works after it.
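Putting the whole thread together, the working initialization order looks roughly like this (a sketch, assuming SDL2 + GLEW; error checking omitted):

```
  SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
  SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 5);
  SDL_Window *window = SDL_CreateWindow("OpenGL Test",
                                        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                        640, 480, SDL_WINDOW_OPENGL);
  SDL_GLContext context = SDL_GL_CreateContext(window); // context is now current
  glewInit();                  // loads wglSwapIntervalEXT and friends
  SDL_GL_SetSwapInterval(1);   // VSync; only takes effect with a current context
```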
