ATI pbuffer slowdown

Hi all,
I'm doing a glow effect using pbuffers. On NVIDIA hardware there are no problems, but on ATI the framerate drops from 60 fps to 0.05 fps. I'm doing it this way:

before loading any texture:

  1. Create a pbuffer_blur 256x256
  2. Create a pbuffer_glow 256x256
  3. Initialize the VBOs and other textures
    -- loop --
  4. Render everything to pbuffer_glow, with tex env set to rgb*alpha
  5. Bind pbuffer_glow (as texture_2d) and render it with radial blur to pbuffer_blur
  6. Render the scene
  7. Draw a quad over the whole screen with pbuffer_blur
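In GL calls, one pass of that loop looks roughly like this (just a sketch of my setup; the make_current_* and draw_* helpers and the *_texture_ID names are placeholders, not my real code):

```c
/* Steps 4-7, sketched; helper names are placeholders for my actual code. */
make_current_pbuffer(pbuffer_glow);            /* 4: render glow sources      */
/* tex env is set so the output is rgb * alpha */
draw_scene();

make_current_pbuffer(pbuffer_blur);            /* 5: radial blur pass         */
glBindTexture(GL_TEXTURE_2D, glow_texture_ID); /* pbuffer_glow as TEXTURE_2D  */
draw_radial_blur();

make_current_window();                         /* 6: the normal scene         */
draw_scene();

glBindTexture(GL_TEXTURE_2D, blur_texture_ID); /* 7: overlay the blur         */
draw_fullscreen_quad();                        /* this is where it crawls     */
```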

I'm clearing only the depth buffer bit of pbuffer_glow (I render a black rectangle over the viewport to clear the color), and the slowdown is in step 7. Debugging in Visual Studio and pressing Ctrl-F10 (go to next instruction), just the glEnd() of the quad rendering takes about 3 seconds. Is something wrong with the order of pbuffer creation, or with their sizes?

Hmm… I found the problem. I'm setting the mip level of the pbuffers with

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD_SGIS,1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD_SGIS,1);

and when I comment those lines out, the framerate returns to 60-70 fps. Is there any problem using this extension on ATI?

My guess is that they have not optimized pbuffers for the mipmap-generation scenario, so they are probably just downloading the data from the card into main memory and then re-uploading it to do the generation.

Mipmap generation should be fine on ATI hardware; it's been accelerated for a long time. The problem seems rather to be with clamping the LOD, and by the way, I wonder what the purpose of that is.

Yep, normal mipmap generation for pbuffers does work fine on my Radeon 9700 Pro… so I guess the clamping really is the problem.

Just another guess: is the pixel format for the pbuffers the same as for your main window?

The pixel format is not exactly the same as the main window's…
int attr_s[] =
{
    WGL_SUPPORT_OPENGL_ARB, TRUE,       // pbuffer will be used with GL
    WGL_DRAW_TO_PBUFFER_ARB, TRUE,      // enable render-to-pbuffer
    WGL_BIND_TO_TEXTURE_RGBA_ARB, TRUE, // pbuffer will be used as a texture
    WGL_RED_BITS_ARB, 8,                // at least 8 bits for the RED channel
    WGL_GREEN_BITS_ARB, 8,              // at least 8 bits for the GREEN channel
    WGL_BLUE_BITS_ARB, 8,               // at least 8 bits for the BLUE channel
    WGL_ALPHA_BITS_ARB, 8,              // at least 8 bits for the ALPHA channel
    WGL_DOUBLE_BUFFER_ARB, FALSE,       // we don't require double buffering
    0                                   // zero terminates the list
};
I don't request stencil bits or double buffering…
and the texture creation attributes are

int pAttrib[] =
{
    WGL_TEXTURE_FORMAT_ARB, WGL_TEXTURE_RGBA_ARB, // pbuffer has texture format RGBA
    WGL_TEXTURE_TARGET_ARB, WGL_TEXTURE_2D_ARB,   // of texture type GL_TEXTURE_2D
    WGL_MIPMAP_TEXTURE_ARB, TRUE,                 // reserve space for mipmap levels
    0                                             // zero terminates the list
};
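Those two lists then go into the usual creation sequence, roughly like this (a sketch with error checking omitted; hDC and mainRC are assumed to come from my window setup):

```c
int         pixel_format;
UINT        num_formats;
wglChoosePixelFormatARB(hDC, attr_s, NULL, 1, &pixel_format, &num_formats);

HPBUFFERARB hPBuffer  = wglCreatePbufferARB(hDC, pixel_format, 256, 256, pAttrib);
HDC         pbufferDC = wglGetPbufferDCARB(hPBuffer);
HGLRC       pbufferRC = wglCreateContext(pbufferDC);
wglShareLists(mainRC, pbufferRC);  /* share textures/VBOs with the main context */
```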


create, share lists…

glGenTextures(1, &pbuffer_ID);
glBindTexture(GL_TEXTURE_2D, pbuffer_ID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
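and each frame I attach/detach it with the usual WGL_ARB_render_texture calls, roughly like this (a sketch; hPBuffer is the handle from wglCreatePbufferARB):

```c
glBindTexture(GL_TEXTURE_2D, pbuffer_ID);
wglBindTexImageARB(hPBuffer, WGL_FRONT_LEFT_ARB);    /* pbuffer becomes the texture  */

/* ... draw the quad sampling the pbuffer ... */

wglReleaseTexImageARB(hPBuffer, WGL_FRONT_LEFT_ARB); /* release before rendering
                                                        into the pbuffer again      */
```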

Can anybody explain to me what is wrong with the clamping?