Advanced control of vertical sync.

Thanks V-man,

Your points are very much appreciated here and I personally thank you for being a very helpful and knowledgeable contributor.

Maybe GL ES could provide a good example of how to go about updating OpenGL.

Originally posted by V-man:
eglSwapBuffers
eglSwapInterval
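
For reference, a minimal sketch of how those two entry points are used (assuming an already-initialized dpy and surface):

eglSwapInterval(dpy, 1);       /* one retrace per swap, i.e. vsync on */
/* ... render the frame ... */
eglSwapBuffers(dpy, surface);  /* waits according to the interval */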

And yet, no eglSwapIntervalWaitMethod. Maybe they just haven’t got round to it…
The big question you’ve got to ask yourself, glDesktop, is - if a “glSwapIntervalWaitMethod” function existed in GL, why would anyone issue the following call:
glSwapIntervalWaitMethod(GL_HAMMER_THE_CPU);

I’m not deliberately trying to be anti-social here, it just winds me up that someone who’s observed a driver bug is posting in the “Suggestions for the next release of OpenGL” forum.

don’t mind knackered… he’s upset because he didn’t get the underwear with the little pink elephants for Christmas :(

btw in d3d you can in fact request that the driver return immediately if it can’t fulfill the present request (look at the swapchain present flags). not an explicit “method”, but it can prevent the driver from spinning, allowing the application to continue and try again later…

ATI’s drivers are perfectly valid with regard to the current OpenGL spec, and there may be slight performance benefits from the vertical sync technique they use. Unfortunately, it has a high CPU usage side effect.

ATI have been doing this for years, so it probably isn’t a bug, just their way of doing it.

I’d like to see the OpenGL spec extended to have a unified single set of commands that work on all platforms and also safeguards put in place to prevent high CPU usage.

I found this link:
SwapChain.Present Method

And it says:

If param_Present_flags = 0, Present spins without returning an error until the hardware is free.

What does it mean by spins?

I’d like to see the OpenGL spec extended to have a unified single set of commands that work on all platforms and also safeguards put in place to prevent high CPU usage.
Maybe you didn’t read this the last time, so here it is again:

It is not OpenGL’s responsibility or even its place to dictate performance or well-behaved behavior in a multitasking environment. You have a problem with an ATi driver, so take it up with ATi.

Korval,

You are entitled to your opinion, but I don’t see the need for you to repeat yourself; it just muddies the water. If everyone’s opinion were the same as yours, you would probably be right. But that is not the case, is it?

Your comments were fully taken into account the first time. Thank you.

You are entitled to your opinion
Ah. I see the problem now. You misunderstood how I meant what I said.

It’s not my opinion; it’s a fact about OpenGL. It is what OpenGL is, take it or leave it.

The OpenGL specification does not dictate performance. It isn’t supposed to, and it isn’t allowed to. At best, it can suggest performance. CPU usage isn’t behavior that the specification can mandate.

The specification specifies behavior, not performance. It says how triangles will be rendered, not how fast they will be. It says nothing about performance, except occasionally to suggest that some features (say, VBO) might be faster than others (regular vertex arrays). Note that this is a far cry from mandating such a thing.

Simply put, the specification tells implementers what their OpenGL implementations need to do, not how they need to do it. OpenGL does not guarantee performance.

Discussing adding such an extension is pointless; it is not going to happen. It doesn’t fit with what OpenGL is. And, as knackered wisely pointed out, even if such a thing became an extension, when would you not use it? And if you would always use it, why not simply mandate such performance in the spec and skip the extension entirely (leaving aside the fact that the spec can’t mandate performance)?

I am suggesting a more advanced, unified method of controlling vertical sync, implemented in the OpenGL driver. That would in turn have an indirect influence on CPU usage in the system.
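
Purely as an illustration of what I mean (these names are hypothetical; none of them exist in any spec or extension), something along these lines:

glSwapInterval(1);                              // hypothetical: vsync on, on every platform
glSwapIntervalWaitMethod(GL_WAIT_METHOD_BLOCK); // hypothetical: sleep until the retrace, don't spin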

If the OpenGL spec has no control over the behaviour of OpenGL drivers, then something must be wrong somewhere.

Originally posted by glDesktop:
If param_Present_flags = 0, Present spins without returning an error until the hardware is free.
What does it mean by spins?

Spins means something like while (!vtrace()) { Sleep(0); }, i.e. polling until the vertical retrace arrives.
And all that parameter means is that the application becomes responsible for the spinning. So you could instead do while (!present()) calculate_A_Particles();

D3D has an interesting feature I hadn’t paid attention to:
D3DPRESENT_DONOTWAIT which works with IDirect3DSwapChain9::Present

If the hardware is still doing the blit operation, the D3D swapbuffer call returns D3DERR_WASSTILLDRAWING.
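
So a present loop that never spins could look roughly like this (a sketch; doUsefulWork() is a stand-in for whatever the application wants to run while the hardware is busy):

#include <d3d9.h>

void doUsefulWork();  // placeholder: particle updates, streaming, etc.

// Present without letting the runtime spin for us: while the hardware
// is still busy with the previous blit, do application work and retry.
void presentWithoutSpinning(IDirect3DSwapChain9* swapChain)
{
    while (swapChain->Present(NULL, NULL, NULL, NULL, D3DPRESENT_DONOTWAIT)
           == D3DERR_WASSTILLDRAWING)
    {
        doUsefulWork();
    }
}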

http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/004011.html

There is a solution in there that uses NV_fence, but it’s ATI that needs this.
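
The idea in that thread goes roughly like this (a sketch; on Windows the NV_fence entry points are fetched via wglGetProcAddress, and hdc is the usual device context):

GLuint fence;
glGenFencesNV(1, &fence);

/* ... submit the frame's rendering commands ... */

glSetFenceNV(fence, GL_ALL_COMPLETED_NV);  /* fence after the frame */
glFlush();

while (!glTestFenceNV(fence))  /* poll ourselves instead of blocking in the driver */
    Sleep(0);                  /* or do useful CPU work here */

SwapBuffers(hdc);              /* GPU has caught up, so this should return promptly */
glDeleteFencesNV(1, &fence);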

I’m still very keen to have this suggestion put forward.

And I’d also like some official feedback on it too, whether it be positive or negative.

I’m very serious about having this feature in the next version of OpenGL. I think it is very much overlooked and long overdue.

I noticed this within some of the recent OpenGL 2.1 updates:

“ARB_synch_object derived from GL_NV_fence - but allowing sharing and using separated objects - and GL2_async_core - but a subset, with an eye to a later superset.”

Could this be used for what I am suggesting? I am hoping so. Fingers crossed.
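
If it exposes fences with a client-side wait, it might be exactly that. As a sketch, in the glFenceSync/glClientWaitSync style that this work eventually settled on in ARB_sync (note that whether the driver sleeps or spins during the wait is still up to the implementation):

GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

/* Hand the wait to the driver, with a timeout in nanoseconds,
   instead of spinning in application code. */
GLenum r = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                            16000000 /* ~16 ms */);
if (r != GL_TIMEOUT_EXPIRED && r != GL_WAIT_FAILED)
    SwapBuffers(hdc);  /* should have little left to block on */
glDeleteSync(fence);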