Selecting GDI buffer

Hi, I'm using double buffering in OpenGL and swap the buffers via SwapBuffers(HDC).

I would like to switch to the surface that GDI can draw to. The only way I know of is counting the number of times I swap buffers: if the count is even, then I know I'm on the buffer that GDI can draw to.

However, I'm not sure this is a correct way of doing it. Is there an API call to switch directly to the GDI-drawable buffer? I looked in the OpenGL section of MSDN but found nothing relevant.

Originally posted by xanatose:
[b]Hi, I'm using double buffering in OpenGL and swap the buffers via SwapBuffers(HDC).

I would like to switch to the surface that GDI can draw to. The only way I know of is counting the number of times I swap buffers: if the count is even, then I know I'm on the buffer that GDI can draw to.

However, I'm not sure this is a correct way of doing it. Is there an API call to switch directly to the GDI-drawable buffer? I looked in the OpenGL section of MSDN but found nothing relevant.[/b]
I don’t think I understand the question: GDI only draws to the frontbuffer, there’s no even/odd SwapBuffers thing.

Also, note that you can rely on GDI working on OpenGL surfaces only if the flag PFD_SUPPORT_GDI is set when you query your current PIXELFORMATDESCRIPTOR.

In general, mixing GDI with OpenGL is not recommended and should be avoided when possible. When it's unavoidable, use GdiFlush and glFinish to synchronise GDI and OpenGL rendering.
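To make the PFD_SUPPORT_GDI check concrete, here is a minimal sketch in plain C. The flag values are the standard ones from wingdi.h, reproduced locally so the snippet compiles without Windows headers; on Windows you would fill `dwFlags` from DescribePixelFormat on your DC.

```c
#include <assert.h>

/* Flag values as defined in wingdi.h (reproduced here so this
   sketch stands alone without Windows headers). */
#define PFD_DOUBLEBUFFER  0x00000001u
#define PFD_SUPPORT_GDI   0x00000010u

/* Returns 1 if a pixel format with these dwFlags allows GDI drawing.
   Per MSDN, PFD_SUPPORT_GDI and PFD_DOUBLEBUFFER are mutually
   exclusive in the generic implementation, so both are checked. */
static int gdi_drawable(unsigned int dwFlags)
{
    return (dwFlags & PFD_SUPPORT_GDI) != 0
        && (dwFlags & PFD_DOUBLEBUFFER) == 0;
}
```

If gdi_drawable returns 0 for your current format, GDI output onto the OpenGL surface is undefined and you need one of the workarounds discussed below.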

I fully agree.

There are only two sane ways to mix GDI and OpenGL.

Either specify the PFD_SUPPORT_GDI flag, but you will lose hardware acceleration because this forces software rendering (aka the MS generic GDI implementation).

Or use glReadPixels (on the backbuffer after a glFinish(), or even better on a pbuffer) and BitBlt the result onto your GDI DC.
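One detail the readback route needs handled: glReadPixels returns rows bottom-to-top, while a top-down GDI destination (e.g. a top-down DIB with negative biHeight) expects the opposite order. A hedged sketch of the row flip in plain C, with no GL calls, so buffer and sizes here are illustrative:

```c
#include <string.h>

/* Flip an image buffer vertically in place. glReadPixels fills rows
   bottom-to-top; if the GDI side expects top-down rows, flip before
   the BitBlt/SetDIBitsToDevice. row_bytes = width * bytes_per_pixel.
   The fixed scratch size is an assumption for this sketch. */
static void flip_rows(unsigned char *pixels, int rows, int row_bytes)
{
    unsigned char tmp[4096]; /* assumes row_bytes <= 4096 */
    for (int top = 0, bot = rows - 1; top < bot; ++top, --bot) {
        memcpy(tmp, pixels + top * row_bytes, (size_t)row_bytes);
        memcpy(pixels + top * row_bytes, pixels + bot * row_bytes,
               (size_t)row_bytes);
        memcpy(pixels + bot * row_bytes, tmp, (size_t)row_bytes);
    }
}
```

Alternatively, a bottom-up DIB (positive biHeight) already matches glReadPixels' row order, so you can skip the flip entirely.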

Everything else will cause major headaches and may not work on different systems.

Originally posted by Honk:
[b]I fully agree.

There are only two sane ways to mix GDI and OpenGL.

Either specify the PFD_SUPPORT_GDI flag, but you will lose hardware acceleration because this forces software rendering (aka the MS generic GDI implementation).

Or use glReadPixels (on the backbuffer after a glFinish(), or even better on a pbuffer) and BitBlt the result onto your GDI DC.

Everything else will cause major headaches and may not work on different systems.[/b]
Just a note: you don't lose hardware acceleration by specifying PFD_SUPPORT_GDI.

For example, the following pixelformats are from a 9700 pro:

PixelFormat 1
DRAW_TO_WINDOW | SUPPORT_GDI | SUPPORT_OPENGL
Color bits(shift): 32 8(16) 8(8) 8(0)
Alpha bits(shift): 8(24)
Depth bits: 24
Stencil bits: 8
Accum bits: 0 - 0 0 0 0
Renderer: RADEON 9700 PRO x86/SSE2
GL bitdepth (RGBA): 8 8 8 8

PixelFormat 2
DRAW_TO_WINDOW | SUPPORT_GDI | SUPPORT_OPENGL
Color bits(shift): 32 8(16) 8(8) 8(0)
Alpha bits(shift): 8(24)
Depth bits: 24
Stencil bits: 8
Accum bits: 64 - 16 16 16 16
Renderer: RADEON 9700 PRO x86/SSE2
GL bitdepth (RGBA): 8 8 8 8

What never seems to be exposed (in this hardware or in MS's software implementation) is PFD_DOUBLEBUFFER together with PFD_SUPPORT_GDI (as MSDN puts it: "The buffer supports GDI drawing. This flag and PFD_DOUBLEBUFFER are mutually exclusive in the current generic implementation.").
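For reference, the shifts in the listings above describe where each channel sits in the 32-bit pixel: red at bit 16, green at 8, blue at 0 (the usual BGRA byte order), alpha at 24. A small illustrative sketch of unpacking such a pixel, e.g. to build a GDI COLORREF, which stores red in the low byte, so red and blue swap relative to this layout:

```c
/* Unpack a 32-bit pixel laid out as in the listings above:
   red at shift 16, green at shift 8, blue at shift 0. */
static unsigned char pix_red(unsigned int p)   { return (p >> 16) & 0xFF; }
static unsigned char pix_green(unsigned int p) { return (p >> 8)  & 0xFF; }
static unsigned char pix_blue(unsigned int p)  { return p & 0xFF; }

/* GDI's COLORREF is 0x00bbggrr: red in the LOW byte, so the red
   and blue channels swap relative to the pixel format layout. */
static unsigned int to_colorref(unsigned int p)
{
    return (unsigned int)pix_red(p)
         | ((unsigned int)pix_green(p) << 8)
         | ((unsigned int)pix_blue(p)  << 16);
}
```

Getting this channel order right matters when you hand glReadPixels output to GDI: reading back as GL_BGRA matches the layout above directly.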

Interesting, but there is no need to talk down to me like that.

What I said is still valid:

Everything else will cause major headaches and may not work on different systems.
I tried to use SUPPORT_GDI in the past (back when the GF1 was still king of the hill) and got kicked into software rendering no matter what. I never bothered to test it with newer hardware/drivers, though; nowadays I would just render to a pbuffer and BitBlt that if I needed GDI interoperability (I don't, lucky me).

It may work on your R300, but that's no guarantee it will run on a different GPU (it will most likely work on all current GPUs and/or current drivers, but that is not my point).

I don't see why you had to bring up the MSDN docs, since they are a) old hat and b) only marginally related.

Render to a pbuffer and a) you don't lose double buffering and b) you don't have problems if the window is partly obscured, not to mention FSAA issues. The only drawback may be the glReadPixels() readback speed, but that highly depends on the GPU and the bus (AGP vs. PCI Express) you are using.

Hope this clears it up.

Originally posted by Honk:
Interesting, but there is no need to talk down to me like that.

I can't see where I talked down to you at all; anyway, if I did, it wasn't my intention and I apologise.

[b]

I don't see why you had to bring up the MSDN docs, since they are a) old hat and b) only marginally related.

[/b]
The main reason I brought them up was that they explain why you won't get the PFD_SUPPORT_GDI flag if you request PFD_DOUBLEBUFFER.

Anyway, I know for sure that several year-old first-tier apps mix GDI with OpenGL even on double-buffered formats (which was a headache to support in the driver and also broke other things like FSAA), so the MSDN quote is not definitive with regard to hardware implementations, but it does tie in with the pixel formats exposed by my graphics card.

Another thing you can do is create a child window of the window that holds your OpenGL context. The child window can use GDI as much as it wants (but it'll be opaque). This still lets you stay hardware accelerated on all hardware, and it's good for buttons, text boxes, and that kind of thing.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.