I wasn’t binding the color texture before calling GenerateMipmapEXT, so that solved one problem. NVIDIA’s 80 series were a bit more lenient about it.
I still have to delete the color texture and create a new one every time a window resize changes the required texture size. I thought I could specify a new size through TexImage2D, but it isn’t working. Is that the expected behaviour?
On a 7800 GTX, binding a depth texture or depth render buffer is disappointingly expensive when used with mipmap generation:
2048x2048xRGBA8xD24 runs at about 15 frames per second.
On an X1800 XT performance is decent:
2048x2048xRGBA8xD16 runs at about 50 frames per second. The ATI card crashes if I try to generate mipmaps for non-square textures.
Originally posted by CatAtWork: I thought I could specify a new size through TexImage2D, but it’s not working.
You can, but there’s more to it. After changing the size of the base level with TexImage2D, you also have to change the size of all the other mipmap levels. The easiest way to do that is to call glGenerateMipmapEXT right after the glTexImage2D call that resizes the base level.
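As a minimal sketch of that resize path (assuming a texture object `colorTex` that is attached to an FBO; the variable names are illustrative, not from the original posts):

```c
/* Resize the base level of an FBO color texture, then rebuild the
 * rest of the mipmap chain so every level matches the new size.
 * Requires a current GL context with EXT_framebuffer_object. */
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);  /* resize base level only */
glGenerateMipmapEXT(GL_TEXTURE_2D);             /* resize/fill levels 1..N */
```

Passing NULL as the pixel data just reallocates storage without uploading anything, which is all you need before rendering into the texture again.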
Note that if you have multiple contexts bound to multiple threads, you have to do a BindTexture and/or BindFramebuffer in context B to guarantee that it notices a texture resized by context A. Alternatively, creating a new texture object instead of resizing an existing one keeps that cross-context bookkeeping entirely in your application code.
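In other words, something like this (a sketch, assuming a texture `tex` shared between two contexts; the context names are illustrative):

```c
/* Context A (thread A): resize the shared texture's base level. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Context B (thread B): re-bind so this context picks up the new size
 * before using the texture or an FBO it is attached to. */
glBindTexture(GL_TEXTURE_2D, tex);
```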
I was using this in my texture manager to automatically build mipmaps for static textures, but I forgot to set the hint back to GL_FASTEST after each use. Of course this wasn’t in my test app, so I couldn’t reproduce the behaviour.
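Assuming the hint in question is GL_GENERATE_MIPMAP_HINT, the fix is presumably to scope the expensive setting to the one call that needs it:

```c
/* Ask for high-quality downsampling only while building mipmaps for
 * a static texture, then drop back to the fast path. */
glHint(GL_GENERATE_MIPMAP_HINT, GL_NICEST);
glGenerateMipmapEXT(GL_TEXTURE_2D);
glHint(GL_GENERATE_MIPMAP_HINT, GL_FASTEST);  /* don't leave GL_NICEST set */
```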
This brings performance way up, and gets rid of the bug in which glTexImage2D wasn’t resetting the mipmap chain correctly.