ATI multisample problems

Hey hey!

I’m having two problems with the Radeon 9500 and the latest Catalyst drivers (6.4).

  1. It appears I can’t disable FSAA once it is enabled. The glDisable(GL_MULTISAMPLE_ARB) call seems to be broken. I do this to create a bitmap font:
HFONT l_hFont = CreateFont (
    -11,                        // height of font
    0,                          // average character width (0 = adjust automatically from the height)
    0,                          // angle of escapement
    0,                          // base-line orientation angle
    FW_THIN,                    // font weight
    FALSE,                      // italic attribute option
    FALSE,                      // underline attribute option
    FALSE,                      // strikeout attribute option
    DEFAULT_CHARSET,            // character set identifier
    OUT_TT_PRECIS,              // output precision
    CLIP_DEFAULT_PRECIS,        // clipping precision
    NONANTIALIASED_QUALITY,     // output quality
    FF_DONTCARE|DEFAULT_PITCH,  // pitch and family
    L"Arial" );                 // typeface name

// make this font the device context's selected font
SelectObject ( m_hDC, l_hFont );

m_uiTextListBase = glGenLists ( 255 );
wglUseFontBitmaps ( m_hDC, 0, 255, m_uiTextListBase );

And this to draw text:

//Disable FSAA

//Set ortho projection

//Disable depth test and z-write

//Fill mode

//Draw each text
size_t i;
for ( i = 0; i < m_sDrawSet.texts.size(); ++i )
{
    const sDRAWPETITION_TEXT& t = m_sDrawSet.texts[i];

    glColor4f ( t.color.x, t.color.y, t.color.z, t.opacity );
    // now draw the characters in the string
    glCallLists ( (GLsizei)t.text.length(), GL_UNSIGNED_BYTE, Utils::UNICODE2ASCII(t.text).c_str() );
}

//Restore list base

And I get text blurred by the FSAA…
If I don’t use the GL_ARB_multisample extension and don’t create an FSAA framebuffer, the text is rendered cleanly.

  2. I’m doing vertex picking using a glReadPixels() call to compare each vertex’s z with the z-buffer, to test whether that vertex is occluded. With a multisampled framebuffer it appears I can’t get the z-buffer because glReadPixels() fails. I know I could do this with occlusion queries, but I can’t understand why glReadPixels() fails.

With a GeForce 6800, text is drawn fine with multisample AA and glReadPixels() doesn’t fail.


I asked a similar question about 4 years ago here:;f=3;t=005753#000000

Having said that, true multi-sample FSAA shouldn’t blur your text as long as the text uses textures. However, super-sample FSAA will blur your text, as will Quincunx FSAA. (My question was about Quincunx FSAA.)

I’m pretty sure that the answer remains the same: you cannot have FSAA enabled for part of your drawing of a frame and disabled for another part. The FSAA happens at swap buffers time, so changing the FSAA state mid-frame doesn’t work.

Actually, you can disable multisample rasterization mid-frame. I know this works in D3D at least, I used that in a couple of the SDK samples to boost performance slightly. That’s useful for instance for the skybox.

Not sure what “fails” means here, but using glReadPixels() on a multisample depth buffer is going to push you through a software path.

Yep yep, it appears some graphics cards don’t allow dynamic FSAA… Once it’s enabled you can’t disable it. I’m using multisampling. The GeForce 6 and Radeon X1300 work fine; my old Radeon 9500 appears to ignore the glDisable(GL_MULTISAMPLE_ARB) call when painting the text. Probably, as you mentioned, the FSAA is performed in the Swap() for the entire framebuffer… but then it’s clearly a bad implementation of the GL_ARB_multisample extension, because the spec says very clearly that you must be able to use glDisable() to disable multisampling.

Software mode is ok for me, Humus. I just do a glReadPixels() to compare some z values for vertex picking, and the operation only takes half a second when the user releases a button, so it’s no problem. What I don’t understand is why glReadPixels() just returns garbage data. This is what I do for vertex picking (without using occlusion queries):

glReadPixels ( 0, 0, m_uiWidth, m_uiHeight, GL_DEPTH_COMPONENT, GL_FLOAT, &m_stlDepthBuffer[0] );

where m_stlDepthBuffer is a std::vector<float> and m_uiWidth/m_uiHeight is the viewport size.

I project the world-space vertices onto the screen using gluProject ( wsPos.x, wsPos.y, wsPos.z, modelview, proj, view, &x, &y, &z );

Then I compare all the vertices projected onto the screen with a 2D rectangle that the user marked using the mouse. Vertices are selected if they are inside the 2D screen rectangle and if their projected z is not behind the value read back from the z-buffer at that pixel (i.e. they are not occluded).

I clear the z-buffer with 1.0f, of course.

for ( v = 0; v < mesh->Positions.size(); ++v )
{
    wsPos = mesh->Positions[v];

    y = m_uiHeight - 1 - y;   // 2D rect uses top-left origin

    if ( (l_vB.x-POINT_SIZE < m_iPickX && l_vB.x+POINT_SIZE < m_iPickX) ||
         (l_vB.x-POINT_SIZE > m_iPickX+m_iPickWidth && l_vB.x+POINT_SIZE > m_iPickX+m_iPickWidth) ||
         (l_vB.y-POINT_SIZE < m_iPickY && l_vB.y+POINT_SIZE < m_iPickY) ||
         (l_vB.y-POINT_SIZE > m_iPickY+m_iPickHeight && l_vB.y+POINT_SIZE > m_iPickY+m_iPickHeight) )
        continue;   // dot is completely outside the selection rectangle
}

m_iPickX and m_iPickY are the origin of the 2D selection rectangle on screen.

m_iPickHeight and m_iPickWidth are the 2D rectangle size in pixels.

POINT_SIZE is the point radius (I use BIG dots to mark vertices).

The code works fine on the GeForce and Radeon X1000 series, but fails on the Radeon 9500 with multisampling enabled because glReadPixels() just returns garbage values. Perhaps it can’t return the z-buffer as floating-point data because it uses an integer buffer internally and can’t convert to GL_FLOAT??

Well, I will fall back to occlusion queries if I detect that glReadPixels() fails, but…

Thx for the answers.