Accelerated mode vs. generic (soft) mode question

We have this application that uses OpenGL to display in a window.

We purposely chose NOT to use hardware-accelerated mode (PFD_GENERIC_ACCELERATED) for our OpenGL rendering, favoring support and stability over performance. We do this by rendering our OpenGL calls to a bitmap instead of the window, requesting PFD_GENERIC_FORMAT only, and then we BitBlt that bitmap to the window. This has been working well for us for a while, and we have encountered very few support calls related to bad video drivers.
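For readers unfamiliar with this setup, here is a minimal sketch of the approach described above: render through the generic (software) implementation into a DIB section, then BitBlt the result to the window. All names here are mine, not the poster's; error handling is omitted.

```cpp
// Sketch: offscreen software-GL rendering via PFD_DRAW_TO_BITMAP.
// Deliberately does NOT request PFD_GENERIC_ACCELERATED, so rendering
// goes through the generic software path rather than the vendor ICD.
#include <windows.h>
#include <GL/gl.h>

HDC     g_memDC;   // memory DC owning the DIB section
HBITMAP g_dib;     // bitmap that GL draws into
HGLRC   g_rc;

void CreateSoftwareGL(HDC windowDC, int w, int h)
{
    g_memDC = CreateCompatibleDC(windowDC);

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize     = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth    = w;
    bmi.bmiHeader.biHeight   = h;
    bmi.bmiHeader.biPlanes   = 1;
    bmi.bmiHeader.biBitCount = 24;
    void* bits = nullptr;
    g_dib = CreateDIBSection(g_memDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    SelectObject(g_memDC, g_dib);

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 16;
    SetPixelFormat(g_memDC, ChoosePixelFormat(g_memDC, &pfd), &pfd);

    g_rc = wglCreateContext(g_memDC);
}

// Per frame: wglMakeCurrent(g_memDC, g_rc); draw; glFinish();
// then BitBlt(windowDC, 0, 0, w, h, g_memDC, 0, 0, SRCCOPY);
```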

Lately, we have been getting video driver issues from customers running our application with certain Intel video cards: our application crashes, and we traced the crash into code in Intel's driver. The crash goes away if we upgrade the video driver, or if we turn off graphics acceleration completely at the Windows system level.

I am really puzzled as to why our application would ever be affected by a vendor's driver issue, since we only use OpenGL in generic mode, which uses Microsoft's OpenGL driver, not any vendor's driver.

Can someone explain to me what my problem is? My understanding is that if you use the generic mode of OpenGL, you are using Microsoft's OpenGL implementation. If so, why does our OpenGL application crash in Intel's video driver code? Could it be that we use OpenGL resources or calls that aren't available in Microsoft's OpenGL implementation, and it therefore falls back to the vendor's implementation?

Thanks!

-SharpT

Log the GL_RENDERER etc. strings right after GL rendering context creation; maybe the new Intel driver wrongly provides itself for the generic format.
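A minimal sketch of that check: Microsoft's generic software implementation reports "GDI Generic" as GL_RENDERER, so anything else in the log means a vendor ICD got involved despite the PFD_GENERIC_FORMAT request. The function name is mine; this assumes a current GL context.

```cpp
// Log which GL implementation is actually active. Call this immediately
// after wglMakeCurrent succeeds. "GDI Generic" as GL_RENDERER means
// Microsoft's software implementation; anything else is a vendor driver.
#include <GL/gl.h>
#include <cstdio>

void LogGLImplementation()
{
    std::printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
    std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
    std::printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
}
```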

And what about linking directly to the Mesa3D DLL? That way you are guaranteed software computation.
The Mesa lib has an MIT license.

I have an OpenGL program that always crashes an Intel graphics driver when it goes into Print Preview.

The GL error returned is GL_STACK_OVERFLOW right before the crash. The maximum GL_PROJECTION_STACK_DEPTH returned is 2,
so it is a really crappy card. And I don't use more than 1, so I have no idea why it happens.

Maybe the Intel driver is using OpenGL resources without my knowledge? I don’t have any stack leaks.

Depends on how deep the stack depth is, yes. One workaround would be to retrieve the projection matrix via glGet, but on Intel hardware this would be really slow.
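A related workaround that avoids per-frame glGet round trips is to keep the matrix stack on the client side, so the driver's shallow GL_PROJECTION stack is never exercised: save a copy before modifying, and reload with glLoadMatrixf instead of glPopMatrix. This is a sketch under my own naming, not the poster's code.

```cpp
// Sketch: client-side matrix stack replacing glPushMatrix/glPopMatrix.
// The driver's stack (depth 2 on the card discussed above) is never used;
// on "pop" the caller reloads the returned matrix with glLoadMatrixf.
#include <cassert>
#include <cstring>
#include <vector>

struct MatrixStack {
    std::vector<float> data;  // 16 floats (one column-major 4x4) per entry

    void push(const float m[16]) {           // instead of glPushMatrix()
        data.insert(data.end(), m, m + 16);
    }
    void pop(float m[16]) {                  // instead of glPopMatrix()
        std::memcpy(m, &data[data.size() - 16], 16 * sizeof(float));
        data.resize(data.size() - 16);
        // caller then does: glLoadMatrixf(m);
    }
    size_t depth() const { return data.size() / 16; }
};
```

The stack depth is now bounded only by memory, and a mismatched push/pop becomes detectable in your own code rather than surfacing as a driver-side GL_STACK_OVERFLOW.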

“Maybe the Intel driver is using OpenGL resources without my knowledge? I don’t have any stack leaks.”
That can happen if you use multiple contexts in a wrong way.

I understand that, but I never exceed the stack limit, and all of my glPushMatrix calls are paired with a glPopMatrix.
It only crashes when I go into PrintSetup or PrintPreview. I thought maybe the Intel driver might be stealing my
OpenGL resources somehow, or messing around with my RCs.

The only real thing my program can do while in PrintSetup is refresh the background. It seems to crash randomly
while changing printers, or sometimes while dragging the dialog.

And I try to protect the start of all my rendering functions with code that makes sure the correct RC is being used:


 // Ensure our rendering context is current before issuing GL calls.
 if (wglGetCurrentContext() != m_hRC) {
     wglMakeCurrent(m_pDC->GetSafeHdc(), m_hRC);
 }

The check above would not help if somewhere the context is saved, set to something else, and restored later. Do you use any third-party lib anywhere that might be using OpenGL?
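To make that failure mode concrete, here is a sketch of the defensive pattern for calling into code you don't control (a third-party lib, or the print pipeline): save the current DC/RC pair, let the foreign code run, then restore unconditionally. The function names are hypothetical.

```cpp
// Sketch: protect your rendering context across a call into foreign code
// that may make a different RC current (or make none current).
#include <windows.h>
#include <GL/gl.h>

void CallForeignCode(void (*foreign)())
{
    HDC   savedDC = wglGetCurrentDC();       // remember our DC...
    HGLRC savedRC = wglGetCurrentContext();  // ...and our RC

    foreign();  // may call wglMakeCurrent on a different context

    wglMakeCurrent(savedDC, savedRC);        // restore ours unconditionally
}
```

Note the reverse problem is the one the check at the start of a render function cannot catch: if the foreign code saves *your* RC, switches away, and restores it later at an unexpected time, both contexts can end up touching GL state interleaved.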

“PrintSetup or PrintPreview” doesn’t tell me anything. What is that?

Not that I know of. I use MFC. PrintSetup and PrintPreview are MFC functions (CWinApp::OnFilePrintSetup, CView::OnFilePrintPreview()).
RCs are fully contained within their own window class, so they are created and destroyed along with the window. All windows are created
with CS_OWNDC, so they have their own DC. RCs and DCs don’t change after the window has been created.

Run your code with a GL debugger (GLIntercept). It can break on GL errors and tell you exactly where in the code the error occurred. If you say you do everything properly but it still messes up, then apparently not everything is as good as you think it is; it sounds like some context problem.

I also suggest you use Mesa3D. It is reliable, faster, better quality, and also provides advanced features (like shaders, etc.).

I seem to recall that if you don’t have OpenGL drivers installed,
OpenGL 1.1 will get wrapped by Microsoft to Direct3D, and thus get accelerated.

really? I have no recollection of that, dukey.

Vista comes with an OpenGL 1.4 to D3D wrapper.

Does anyone know what turning off graphics acceleration in the Display control panel actually does?

How is it different from what I do on a per-application-instance basis?

Thanks!

-SharpT