glGenerateMipmapEXT crashes on ATI

Hi folks,

I have a weird crash problem on certain ATI cards. It seems to happen when I call glGenerateMipmapEXT during FBO setup, to establish a set of mipmap levels in the buffer.

I don’t have an ATI card myself, but I have received several reports from users of our software about this problem, and I have been able to trace it to the glGenerateMipmapEXT call.

Here is the FBO setup code:


 // create objects
 glGenFramebuffersEXT(1, &fb_);        // Frame buffer object
 glGenTextures(1, &handle_);           // Texture handle
 if(z_depth_ > 0)
    glGenRenderbuffersEXT(1, &fb_depth_buf_); // Render buffer for z buffering
 
  // Make frame buffer active
 glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb_);
 
 // Initialize texture
 glBindTexture(GL_TEXTURE_2D, handle_);

 // Use CLAMP_TO_EDGE instead of CLAMP to avoid black borders on ATI cards.
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
 glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, anisotropy_);

 // Calculate max mipmap level. Crashes on non-square textures if this
 // isn't specified.
 int mm_level = ilog2(std::min(width_, height_));
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, mm_level);

 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width_, height_, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*)NULL);

 // Crashes here on ATI cards!
 glGenerateMipmapEXT(GL_TEXTURE_2D);
 
 // Attach texture to framebuffer color buffer
 glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, handle_, 0);
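
(For completeness: ilog2 above is just an integer log2 helper, roughly like this, not necessarily our exact code:)

 // floor(log2(n)) for n >= 1, i.e. the index of the last (1x1) mipmap level
 static int ilog2(int n)
 {
     int level = 0;
     while(n > 1) { n >>= 1; ++level; }
     return level;
 }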


Hopefully someone can see what I have done wrong here…

Cheers

I simply don’t use this function, since it tends to crash on Radeons.
I don’t use it on GeForce either, since I would need a codepath without this function anyway.

Oops! I read too quickly, this is not your problem! :) Sorry, no more ideas.

This function works well, at least on my Radeon (an X1600). The crash happens if you don’t explicitly set the attached texture’s minification filtering mode, whatever value you set it to, mipmapping or not. But when generating mipmap levels with glGenerateMipmapEXT, good sense says to use a mipmapping minification filter. :)

That’s of course a possibility :)

What do you use instead? GL_GENERATE_MIPMAP(_SGIS)? Does that work OK on dynamic textures like these?

In this case I do set the minification filter explicitly, though; it’s set to GL_LINEAR_MIPMAP_LINEAR before the call to glGenerateMipmapEXT.

You can also generate the mipmap levels yourself and call glTexImage2D for each one, or use the good old GLU function gluBuild2DMipmaps (which implements the first idea, though you have to link one more library if you’re not already using it).
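
For example (sketch, untested; this assumes you actually have pixel data to upload, so ‘pixels’ is just a placeholder here, and you need to link against GLU):

 #include <GL/glu.h>  // gluBuild2DMipmaps lives here

 // Builds and uploads the whole mipmap chain on the CPU in one call,
 // replacing both the glTexImage2D and the glGenerateMipmapEXT calls.
 gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width_, height_,
                   GL_RGB, GL_UNSIGNED_BYTE, pixels);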

Yes, but those functions downscale the image on the CPU.
It’s better to write a 2x2 box-downscale shader and render directly to the texture.
Keep in mind that you can’t render to a texture you’re currently sampling from, so you must implement some sort of ping-pong scheme, but in the end it’s worth it because it’s a lot faster and runs in parallel with other CPU work.
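
A rough, untested sketch of one way to structure that (this variant renders each level of the same texture from the level above it, clamping sampling to the source level so the level being read and the level being written never overlap; drawFullscreenQuad() and the bound 2x2 box-filter shader are assumed, not real API):

 // With the FBO bound: fill level N by sampling level N-1.
 glBindTexture(GL_TEXTURE_2D, handle_);
 for(int level = 1; level <= mm_level; ++level)
 {
     // Restrict sampling to the source level only (avoids read/write feedback).
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, level - 1);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL,  level - 1);

     // Render into the destination level.
     glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                               GL_TEXTURE_2D, handle_, level);
     glViewport(0, 0, std::max(1, width_ >> level), std::max(1, height_ >> level));
     drawFullscreenQuad();  // hypothetical helper; box-filter shader bound
 }
 // Restore the full mipmap range for normal sampling.
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL,  mm_level);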

It would be a lot easier if I got the glGenerateMipmapEXT function to work, though… I guess the ATI driver developers have tested this at some point and it is supposed to work, so there must be something stupid I have done, or maybe some weird state/order of things that causes this.

I read in an older thread that you need to glEnable(GL_TEXTURE_2D) before allocating the buffers. I couldn’t find this in any of the code examples I’ve seen though. Any thoughts on this?
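
In my setup code that would amount to something like this right before the mipmap call (just a sketch of the reported workaround):

 // Reported ATI workaround: have texturing enabled and the texture bound
 // when generating the mipmaps.
 glEnable(GL_TEXTURE_2D);
 glBindTexture(GL_TEXTURE_2D, handle_);
 glGenerateMipmapEXT(GL_TEXTURE_2D);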

k_szczech, that is true, but IMO it is like bringing out the big guns (if I can say so :) ) for a task that should not be repetitive and is not computationally very expensive compared to the cost of setting up an FBO and shaders to do it (you could say: once it is done, it is done…).

About the glEnable(GL_TEXTURE_2D) story, I don’t know; things are not always logical when working with OpenGL on actual hardware, especially with ATI, according to the general mood. It is not about the hardware, far from it, but more about the OpenGL support.

When I tested, glEnable(GL_TEXTURE_2D) did the trick just like the old thread said, but that was for a static texture. I also removed the code and went back to GL_GENERATE_MIPMAP since it works universally.

When working with the programmable pipeline, glEnable(GL_TEXTURE_2D) shouldn’t be needed… that is the theory; in practice I will have to try this trick.

After testing my Radeon more deeply, it turns out that I was mistaken. It was actually working accidentally, due to a stupid mistake. Calling, at texture initialization:

glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);

works well and is necessary on my Radeon to obtain a complete framebuffer with a 2D texture attached to attachment point 0 using a mipmapping minification filter. Then, after rendering to this texture, I only have to call:

glGenerateMipmapEXT(GL_TEXTURE_2D);

to update each mipmap level, and it does not crash.
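
Put together, the sequence that works for me looks roughly like this (sketch, reusing the names from the setup code above):

 // At texture initialization (needed on my Radeon for FBO completeness):
 glBindTexture(GL_TEXTURE_2D, handle_);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width_, height_, 0,
              GL_RGB, GL_UNSIGNED_BYTE, NULL);

 // After each render to the texture through the FBO, refresh the levels:
 glBindTexture(GL_TEXTURE_2D, handle_);
 glGenerateMipmapEXT(GL_TEXTURE_2D);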

This is a nice example of one company making a function unusable in practice. How long have we been aware of this bug? Six months or so?
I would rather not know what will show up in their upcoming GL3 driver.

Unless you turn GL_GENERATE_MIPMAP off again, we don’t know whether glGenerateMipmapEXT() is actually working or whether it’s the automatic mipmap generation that kicked in.
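
That is, to separate the two, something like (sketch):

 // Turn automatic generation off so only the explicit call can
 // possibly fill the lower levels.
 glBindTexture(GL_TEXTURE_2D, handle_);
 glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
 glGenerateMipmapEXT(GL_TEXTURE_2D);
 // If the lower levels are correct now, glGenerateMipmapEXT really works;
 // if they stay black, the automatic generation was doing the work before.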

I’m not sure I understand what you said, but it looks like on my hardware (Radeon X1600), when I leave GL_GENERATE_MIPMAP on, I have to call glGenerateMipmapEXT(GL_TEXTURE_2D) to update all the mipmap levels, otherwise they turn black after rendering to the texture with an FBO.
If I leave GL_GENERATE_MIPMAP off, the FBO is not complete when I only call glGenerateMipmapEXT(GL_TEXTURE_2D) after attaching the texture and using a mipmapping minification filter.

If that is what you meant, skynet: if I do:

glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);

after rendering to the texture and binding it, all mipmap levels remain black.

This is ridiculous. You should be able to call glGenerateMipmap during setup so it creates the mipmap chain, and again afterwards whenever you need to update the mipmaps… Just another reason why people go and use DX… :(