texture LOD behavior

I’m attempting to use TEXTURE_BASE_LEVEL in our application. However, the texture LOD extension spec doesn’t answer all my questions, and test programs I’ve written have produced inconsistent results.

Here’s how I’d expect it to work: at init time, the application creates/binds a texture object and defines all levels of the mipmap pyramid (say, 0 through 10 for a 1024x1024 texture). Then at render time, the application binds the texture object, sets TEXTURE_BASE_LEVEL to the highest-resolution (lowest-numbered) level needed for the current render pass, and renders the geometry.
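
Here’s a simplified sketch of what I mean (error checking omitted; levelData is just a stand-in for wherever our image data lives):

    GLuint tex;
    GLint  level;

    /* Init time: create the texture object and define every level of the
     * mipmap pyramid. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    for (level = 0; level <= 10; ++level) {
        GLsizei dim = 1024 >> level;    /* 1024, 512, ..., 1 */
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, dim, dim, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, levelData[level]);
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

    /* Render time: restrict sampling to the highest-resolution level this
     * pass needs, then draw. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 4);
    /* ... render geometry ... */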

What I’m seeing, however, is that if TEXTURE_BASE_LEVEL is set to 4 for the first render pass, the driver appears to “trash” levels 0 through 3. If I then set TEXTURE_BASE_LEVEL lower than 4 for subsequent rendering, the applied texture is garbage. I’ve seen this behavior on both the ATI Radeon 9700 and the NVIDIA GeForce4 Ti 4600.

Apparently, when I set TEXTURE_BASE_LEVEL to a value lower than it was previously, I need to respecify the affected levels of the mipmap pyramid with additional calls to glTexImage2D, even though I already specified them at init time.
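
In practice the workaround looks something like this before a pass that lowers the base level from 4 back to, say, 2 (again just a sketch):

    /* Respecify the levels the driver appears to have trashed (here 2 and 3)
     * before lowering the base level again. */
    GLint level;
    for (level = 2; level < 4; ++level) {
        GLsizei dim = 1024 >> level;
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, dim, dim, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, levelData[level]);
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 2);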

However, this is not completely true. I’ve found that if my -first- render pass is done with TEXTURE_BASE_LEVEL set to 0, then I can raise and lower TEXTURE_BASE_LEVEL in subsequent rendering passes without having to respecify any levels of the pyramid! Hmm. (At least, this is the behavior on the Radeon; I haven’t tried it yet on the GeForce4.)
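
So that second “workaround” amounts to nothing more than this:

    /* Do the very first render pass with the whole pyramid accessible... */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    /* ... draw ... */

    /* ...after which raising and lowering the base level in later passes
     * works without respecifying anything (on the Radeon, at least). */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 4);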

So, to recap:

My naive understanding is that the application should not need to keep a copy of the texture at all: the device driver should keep a copy of every level, and graphics RAM should only need to hold the levels from the current TEXTURE_BASE_LEVEL down through the smallest. But I haven’t been able to write a test program that shows it working this way consistently, so my understanding of this feature must be flawed.

If anyone can shed some light on this feature for me, I’d appreciate it.

P.S. I’ve also toyed with the TEXTURE_MIN_LOD/TEXTURE_MAX_LOD parameters. Setting TEXTURE_MAX_LOD to anything other than its default value dramatically reduces rendering speed on the Radeon. Supporting the Radeon is a must, so if the only solution is to use the LOD clamps instead of TEXTURE_BASE_LEVEL, I’ll have to add device-specific code so that we don’t attempt this at all on the Radeon.
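
For reference, the LOD-clamp variant I experimented with is roughly this (the values are illustrative):

    /* Leave the base level alone and clamp the LOD range instead, so that
     * sampling never touches any level finer than 4. This is the path that
     * kills performance on the Radeon. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, 4.0f);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, 10.0f);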