Radeon vs. nVidia


I have created an MFC application in Visual C++ whose main window contains a splitter window with four views. Each view creates its own OpenGL rendering context. On Radeon cards the program runs smoothly, but on nVidia cards it runs very slowly. I have tried both the latest nVidia cards and older ones, and the problem occurs on all of them.

As an experiment I removed the PFD_DOUBLEBUFFER flag from the pixel format descriptor, and the speed difference was dramatic: the program ran much faster, although with heavy flicker. I understand that double buffering adds some overhead to rendering, but I didn’t think it would slow the program down that much.

I don’t think that is the real cause, though. Here is my OpenGL setup code:

BOOL CMyView::SetupPixelFormat()
{
	static PIXELFORMATDESCRIPTOR pfd =
	{
		sizeof(PIXELFORMATDESCRIPTOR),	// size of this pfd
		1,				// version number
		PFD_DRAW_TO_WINDOW |		// support window
		PFD_SUPPORT_OPENGL |		// support OpenGL
		PFD_DOUBLEBUFFER,		// double buffered
		PFD_TYPE_RGBA,			// RGBA pixel type
		24,				// 24-bit color depth
		0, 0, 0, 0, 0, 0,		// color bits ignored
		0,				// no alpha buffer
		0,				// shift bit ignored
		0,				// no accumulation buffer
		0, 0, 0, 0,			// accumulation bits ignored
		16,				// 16-bit z-buffer
		0,				// no stencil buffer
		0,				// no auxiliary buffer
		PFD_MAIN_PLANE,			// main layer
		0,				// reserved
		0, 0, 0				// layer masks ignored
	};

	int m_nPixelFormat = ::ChoosePixelFormat(m_pDC->GetSafeHdc(), &pfd);

	if(m_nPixelFormat == 0) return FALSE;

	return ::SetPixelFormat(m_pDC->GetSafeHdc(), m_nPixelFormat, &pfd);
}


BOOL CMyView::InitOpenGL()
{
	// get the view's client dc
	m_pDC = new CClientDC(this);

	// if the dc is invalid then return FALSE
	if(m_pDC == NULL) return FALSE;

	// setup the view's pixel format
	if(!SetupPixelFormat()) return FALSE;

	// create the rendering context
	m_hRC = ::wglCreateContext(m_pDC->GetSafeHdc());

	// if the rendering context could not be created then return FALSE
	if(m_hRC == 0) return FALSE;

	// make the rc current
	if(::wglMakeCurrent(m_pDC->GetSafeHdc(), m_hRC) == FALSE)
		return FALSE;

	return TRUE;
}


And the OnDraw function:

void CMyView::OnDraw(CDC* /*pDC*/)
{
	SetContext();		// set the render context
	RenderScene();		// render this scene
	SwapGLBuffers();	// swap OpenGL's buffers
}

The scene doesn’t need to be complex for the slowdown to appear; rendering a single cube is enough.

A while back I wrote a C++ program that did essentially the same thing, but using a single MFC view and the glViewport() function to create the four viewports. Looking back at that program, it ran just as fast on nVidia cards as on Radeon cards. The only real difference is that the new program has four views instead of one.
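(For anyone comparing the two approaches: the single-view version needs only one context and one SwapBuffers per frame. Below is a minimal sketch of how the four viewport rectangles for a 2x2 split could be computed before each glViewport() call — the struct and function names are illustrative, not from the original program.)

```cpp
#include <array>

// A rectangle in window coordinates, as passed to glViewport().
struct ViewportRect { int x, y, w, h; };

// Split a w x h client area into four quadrants. OpenGL's window
// origin is the bottom-left corner, so the quadrants are returned
// as bottom-left, bottom-right, top-left, top-right. Odd sizes are
// handled by giving the right/top quadrants the extra pixel.
std::array<ViewportRect, 4> SplitIntoQuadrants(int w, int h)
{
	const int hw = w / 2;	// half width
	const int hh = h / 2;	// half height
	return {{
		{ 0,  0,  hw,     hh     },	// bottom-left
		{ hw, 0,  w - hw, hh     },	// bottom-right
		{ 0,  hh, hw,     h - hh },	// top-left
		{ hw, hh, w - hw, h - hh },	// top-right
	}};
}
```

In the render loop each quadrant would then be selected with glViewport(r.x, r.y, r.w, r.h) (and usually a matching glScissor) before drawing that view's scene, followed by a single SwapBuffers call for the whole window.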

Any help would be greatly appreciated!

Best Regards,
Lea Hayes

How much does it slow down? What FPS are you getting?
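(If you don't already have a frame counter, here is a minimal sketch of one — the class name and structure are illustrative, not from the original program. Timestamps are passed in explicitly so the logic is self-contained; in the render loop you would pass the current time on every frame.)

```cpp
#include <chrono>

// Minimal frame-rate counter. Call NotifyFrame() once per rendered
// frame with the current time; it returns the measured FPS whenever
// at least one second has elapsed since the last report, else 0.0.
class FpsCounter
{
public:
	using clock = std::chrono::steady_clock;

	double NotifyFrame(clock::time_point now)
	{
		if(!m_started) { m_start = now; m_started = true; }
		++m_frames;
		const std::chrono::duration<double> elapsed = now - m_start;
		if(elapsed.count() >= 1.0)
		{
			const double fps = m_frames / elapsed.count();
			m_frames = 0;		// start a new measurement window
			m_start  = now;
			return fps;
		}
		return 0.0;
	}

private:
	clock::time_point m_start{};
	int  m_frames = 0;
	bool m_started = false;
};
```

Printing the returned value (e.g. with TRACE or in the window title) on each Radeon and nVidia card would put a concrete number on the difference.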

It could be that you have the shared back buffer setting enabled. Some people have mentioned this being a problem.

Also, having multiple OpenGL windows causes a slowdown on both vendors because of the context switches. That's just the way it is, and always has been, on consumer cards. NV's Quadro line should work fine in this area.

Probably best to use a single view split into viewports.
