Weird rendering problem on certain computers

I am doing a project converting an IGES file to STL format, and while testing the program on my teacher’s computer I noticed a strange problem that only occurs on certain computers. I’m adding pictures for comparison.
The way it should be:

The way it appears on some computers:

What I’m working with: Visual Studio 2005, the MFC libraries, and the GLU 1.3 DLL from here (I need it for the NURBS tessellation functions). I first noticed the problem on my teacher’s computer, which doesn’t have Visual Studio 2005, but I used a release build with the MFC libraries statically linked, and I made sure glu32.dll v1.3 was in the same directory (otherwise it wouldn’t even run). I then tried it on a friend’s computer and had the same problem; he does have Visual Studio 2005 installed, so that can’t be the factor. Lastly, I tried it on my brother’s computer, and there it has no problems at all! (It’s also fine on my own computer, by the way.) My brother suggested that since we both have newer video cards (I have an NVIDIA 7600, and he has a year-old ATI card, I can’t remember the model), the rendering problem could be related to the graphics hardware (the other computers have NVIDIA GeForce 4-era cards).
In case it might be relevant, this is what I have in my CView::OnDraw function:

void CIgesView::OnDraw(CDC* /*pDC*/)
{
	CIgesDoc* pDoc = GetDocument();
	if (!pDoc)
		return;

	// Clear out the color & depth buffers
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	// ... scene rendering ...

	// Tell OpenGL to flush its pipeline
	glFlush();

	// Now swap the buffers
	::SwapBuffers(m_pDC->GetSafeHdc());
}


Please, any hint in the right direction will be deeply and thoroughly appreciated!!

I am almost certain the second image is caused by a 16-bit z-buffer (also known as a depth buffer). The first image must come from a 24-bit z-buffer.

I am not sure how to request a 24-bit depth buffer with GLUT, but in any case you should push your near clipping plane farther out and bring your far clipping plane closer, to gain z-buffer precision.

Sometimes the desktop color depth controls the depth buffer precision: be sure to set it to 24 or 32 bits before running your program. You may also try to request a stencil buffer; depth and stencil are often tied together in a 24+8-bit combination.
It is possible with GLUT to query how many bits you really got for each buffer, after window creation and GLUT initialization.

edit: I just found this link; it can be of interest if the problem is with an Intel integrated graphics card:

Now that you mention it, it certainly looks like an issue with the z-buffer.
I am not using GLUT, only GL and GLU. I’m rendering through a doc/view scheme in MFC, so everything is done through the view class messaging system. I do have something related to the depth buffer in the pixel format descriptor:

BOOL CIgesView::SetupPixelFormat()
{
	static PIXELFORMATDESCRIPTOR pfd =
	{
		sizeof(PIXELFORMATDESCRIPTOR),    // size of this pfd
		1,                                // version number
		PFD_DRAW_TO_WINDOW |              // support window
		PFD_SUPPORT_OPENGL |              // support OpenGL
		PFD_DOUBLEBUFFER,                 // double buffered
		PFD_TYPE_RGBA,                    // RGBA type
		24,                               // 24-bit color depth
		0, 0, 0, 0, 0, 0,                 // color bits ignored
		0,                                // no alpha buffer
		0,                                // shift bit ignored
		0,                                // no accumulation buffer
		0, 0, 0, 0,                       // accumulation bits ignored
		16,                               // 16-bit z-buffer
		0,                                // no stencil buffer
		0,                                // no auxiliary buffer
		PFD_MAIN_PLANE,                   // main layer
		0,                                // reserved
		0, 0, 0                           // layer masks ignored
	};

	int m_nPixelFormat = ::ChoosePixelFormat(m_pDC->GetSafeHdc(), &pfd);
	if (m_nPixelFormat == 0)
		return FALSE;

	if (::SetPixelFormat(m_pDC->GetSafeHdc(), m_nPixelFormat, &pfd) == FALSE)
		return FALSE;

	return TRUE;
}

You can see that I have, in fact, set the z-buffer to 16 bits. But that doesn’t cause any problems on my computer or my brother’s. And according to this page, the video card my friend has (the one that does show the problem) has “Integrated 32-bit color and 32-bit Z/Stencil Buffer”.
Just a random idea: could it also be related to having blending enabled?

[EDIT] After some googling, it seems it really is the z-buffer. I found out about the “z-fighting” phenomenon. And my near and far planes really ARE far apart:
gluPerspective(45.0f, aspect_ratio, 0.01f, 2000.0f);
What I don’t understand is why I don’t have any problems. I’ll ask my friend to try different numbers and post back if it works.

The number of bits you request is a minimum, so your card will give you 24 even when you ask for 16. But that is not the case for all cards :slight_smile:
Put 24 and problem solved.

In my opinion, the z-fighting in the second image is much too severe even for a 16-bit depth buffer. That box is really not THAT thin.

So you may keep the 16-bit depth buffer (though 24 is the standard nowadays).
Your problem is most likely caused by wrong near/far clipping plane settings, as ZbuffeR mentioned. Check your glFrustum / gluPerspective call and use less drastic zNear/zFar values.

It was the zNear clipping plane. Problem solved!
I set it to 5 instead of 0.01 and now it works perfectly. You learn something new every day… Thanks a lot, people!!!