FBO mipmap generation

With an FBO-bound color texture:

glGenerateMipmapEXT seems to be broken on both ATI's and NVIDIA's latest drivers (Catalyst 6.6 and the ForceWare 90 series, respectively).

ATI’s drivers corrupt all the mipmap levels, at least on the two X1800XTs here.

NVIDIA’s drivers seem to do the work of building the mipmaps, but I can’t see the results with texture2DLod in GLSL, or TEXTURE_LOD_BIAS.
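For context, the sequence being discussed is roughly the following (a sketch using the EXT_framebuffer_object entry points; `fbo` and `colorTex` are placeholder names, not from the original posts):

```c
/* Sketch: render into the base level of a mipmapped color texture,
 * then ask the driver to rebuild the rest of the chain. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex, 0);  /* attach level 0 */
/* ... draw the scene ... */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

glBindTexture(GL_TEXTURE_2D, colorTex);  /* texture must be bound */
glGenerateMipmapEXT(GL_TEXTURE_2D);      /* rebuild levels 1..N */
```

It is the levels produced by that last call that the shader-side texture2DLod / TEXTURE_LOD_BIAS lookups fail to see on the drivers mentioned above.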

I’m only using mipmap generation in one project:

Worked fine on a GeForce 6600 GT. Now I'm using a 7800 GT with ForceWare 91.28 Beta, and it still works fine.
I'm sampling with texture2D in my GLSL shaders, using the third (LOD bias) parameter.

On a Radeon X850 I'm getting an exception when calling glGenerateMipmapEXT. Float16 filtering is not supported on that hardware, but an exception is just a bit too expressive a way of saying 'unsupported'. :)

I was calling GenerateMipmapEXT before the color texture was bound; binding the texture first solved one problem. NVIDIA's 80-series drivers were a bit more lenient about this.

I still have to delete and create a new color texture every time I resize the window such that the size of the color texture needs to change. I thought I could specify a new size through TexImage2D, but it’s not working. Is that the expected behaviour?

On a 7800 GTX, binding a depth texture or depth render buffer is disappointingly expensive when used with mipmap generation:
2048x2048xRGBA8xD24 runs at about 15 frames per second.

On an x1800xt performance is decent:
2048x2048xRGBA8xD16 runs at about 50 frames per second. The ATi card crashes if I try to generate mipmaps for non-square textures.

Originally posted by CatAtWork:
I thought I could specify a new size through TexImage2D, but it’s not working.
You can, but there's more to it. After changing the size of the base level with TexImage2D, you also have to change the size of all the other mipmap levels. The easiest way to do that is to call glGenerateMipmapEXT after changing the size of the base level with glTexImage2D.
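In code, the resize path described here looks roughly like this (a sketch; `colorTex`, `newW`, and `newH` are placeholder names):

```c
/* Resize the render-target texture in place (sketch). */
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newW, newH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);  /* respecifies level 0 only */
glGenerateMipmapEXT(GL_TEXTURE_2D);             /* resizes levels 1..N to match */
```

Without the glGenerateMipmapEXT call, the texture is mipmap-incomplete (level 0 no longer matches the old levels 1..N), which is why respecifying the base level alone appears not to work.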

Note that if you have multiple contexts bound in multiple threads, you have to do a BindTexture and/or BindFramebuffer in context B to guarantee that it notices the texture was resized by context A. Alternatively, creating a new texture object instead of resizing the existing one pushes management of this cross-context complexity entirely into your application code.

I finally tracked down what was causing the performance drop. What a debugging nightmare.


I was using this in my texture manager to automatically build mipmaps for static textures, but I forgot to set it to GL_FASTEST after each use. Of course this wasn’t in my test app, so I couldn’t reproduce the behaviour.
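The snippet elided above is presumably the mipmap-generation quality hint; the GL_FASTEST remark suggests something like the following (an assumption based on context, not confirmed by the original post):

```c
/* Presumed culprit (assumption): requesting highest-quality automatic
 * mipmap generation and forgetting to switch back afterwards. */
glHint(GL_GENERATE_MIPMAP_HINT, GL_NICEST);
```

On some drivers the GL_NICEST path selects a much slower high-quality downsampling filter, which would explain a performance drop that persists for every subsequent glGenerateMipmapEXT call until the hint is reset to GL_FASTEST.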

This brings performance way up, and gets rid of the bug in which glTexImage2D wasn’t resetting the mipmap chain correctly.