Mipmapping GL_TEXTURE_LOD_BIAS

I don’t understand what the values passed to glTexParameterf for GL_TEXTURE_LOD_BIAS actually mean:

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -1.4f);

I read that the value is clamped to an implementation-defined range that can be queried with GL_MAX_TEXTURE_LOD_BIAS.

What I’m trying to do is make mipmapping less noticeable, but I want the textures to look the same on any OpenGL implementation. On my laptop’s implementation the maximum is 15. I tried a few values and -1.4 is perfect for me (on this implementation at least).

My questions are:

If the maximum on a different implementation were different (say 5 instead of 15), would the value -1.4 mean something else on that implementation and act as a stronger bias?

What does that bias value actually mean, given that it is clamped to an implementation-defined range?

I don’t want the textures to have a stronger bias on different implementations.

Did I make myself clear?

No.

It’s the bias applied to the level of detail (LoD).

The texture lookup functions which use (implicit or explicit) partial derivatives (texture, textureGrad) calculate a scale factor ρ as

ρ = max(|∂P/∂x|, |∂P/∂y|)

If a texture is mapped so as to be displayed “at scale” (a one-to-one correspondence between texels and screen pixels), the scale factor is 1.0. If the texture were mapped at half size, the scale factor would be 2.0; at double size it would be 0.5. IOW, the scale factor is the distance in texels between the centres of adjacent screen pixels. If the x and y scale factors differ, the larger value is used.
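
For what it’s worth, here is a rough CPU-side sketch of that calculation (the names are mine, not GL API, and I’m treating |·| as the length of the 2-D texel-space derivative vector):

#include <math.h>

typedef struct { float x, y; } vec2;

/* Length of a 2-D vector. */
static float len2(vec2 v) { return sqrtf(v.x * v.x + v.y * v.y); }

/* dPdx, dPdy: change of the texel-space coordinate per screen pixel in x and y.
   Returns the scale factor rho described above. */
static float scale_factor(vec2 dPdx, vec2 dPdy)
{
    float rx = len2(dPdx);
    float ry = len2(dPdy);
    return rx > ry ? rx : ry;   /* the larger of the x and y factors */
}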

The level of detail (LoD) is calculated as λ = log2(ρ). The textureLod function takes the level of detail as an argument rather than calculating it.

A value of 0 results in mipmap level 0 (the base level) being sampled, a value of 1 samples level 1, a value of 0.5 uses a 50-50 mix of levels 0 and 1 (assuming GL_LINEAR_MIPMAP_LINEAR filtering). More generally, the _MIPMAP_LINEAR filters use the levels corresponding to floor(λ) and ceil(λ), interpolating according to fract(λ), while the _MIPMAP_NEAREST filters simply round λ to the nearest integer to select a single mipmap level.
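
Continuing that CPU-side model, here is a sketch of the GL_LINEAR_MIPMAP_LINEAR selection (again, the names are mine, and clamping to the texture’s actual number of levels is omitted):

#include <math.h>

/* Given the scale factor rho, pick the two mip levels and the blend weight.
   mix = 0.0 means all of 'lo', mix = 1.0 means all of 'hi'. */
static void select_levels(float rho, int *lo, int *hi, float *mix)
{
    float lambda = log2f(rho);          /* level of detail */
    if (lambda < 0.0f) lambda = 0.0f;   /* magnification: stay on the base level */
    *lo  = (int)floorf(lambda);
    *hi  = (int)ceilf(lambda);
    *mix = lambda - floorf(lambda);     /* fract(lambda) */
}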

The bias is a value added to λ (either calculated or passed as an argument to textureLod) before the selection of levels and interpolation factor. The net result is to make the effective scale factor higher or lower than the actual scale factor. A bias of -1.4 would cause the base level (exclusively) to be sampled for any λ up to 1.4. A λ value of 1.5 with a -1.4 bias would be equivalent to 0.1 with no bias, i.e. a blend of 90% level 0, 10% level 1.
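
In terms of the sketch above, the bias just shifts λ before the selection step (the clamp against the implementation’s limit is omitted here):

/* Add the bias to lambda before selecting levels; never go below the base level. */
static float biased_lod(float lambda, float bias)
{
    float l = lambda + bias;
    return l < 0.0f ? 0.0f : l;
}

/* Example from above: biased_lod(1.5f, -1.4f) gives roughly 0.1, i.e. a
   90%/10% blend of levels 0 and 1 with GL_LINEAR_MIPMAP_LINEAR. */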

A negative bias will typically result in aliasing (e.g. moiré patterns), while a positive bias will result in blurring.

Also: certain texture lookup functions (texture, textureProj, and the *Offset versions of those) accept a bias parameter which is added to the LoD calculated from implicit derivatives, in addition to any bias set by glTexParameter.

Thank you! So, the textures will look the same on any implementation when I pass -1.4, right?

They should. The standard requires GL_MAX_TEXTURE_LOD_BIAS to be at least 2.0, i.e. bias values between -2 and 2 should be supported by all implementations.

That assumes that no bias is being specified in the shader. The limit is for the total bias from both GL_TEXTURE_LOD_BIAS and the shader.
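
If you want to be safe, you can query the limit and clamp your own bias to it. A minimal sketch (wantedBias is just the value you picked):

GLfloat maxBias = 0.0f;
GLfloat wantedBias = -1.4f;
glGetFloatv(GL_MAX_TEXTURE_LOD_BIAS, &maxBias);

/* Clamp the requested bias to the implementation-defined range. */
GLfloat bias = wantedBias;
if (bias >  maxBias) bias =  maxBias;
if (bias < -maxBias) bias = -maxBias;

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, bias);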

To be honest, I’d be surprised if any modern (3+) implementations actually limit the bias, as you can achieve the same effect by simply using textureLod or by scaling the derivatives passed to textureGrad. I suspect that the limit is a holdover from 2.x, when textureLod could only be used in the vertex shader and the fragment shader’s mipmap level selection was basically hard-wired.

No. Even on a specific GPU and GPU driver version, it’s going to depend on your driver’s config too.

Why?

In the GPU driver’s implementation, is this truly a LINEAR blend between MIPs? Or is it a smoothstep? Does it blend over the entire range between MIP levels … or hmmm… maybe just the middle 33% of it? In this, there’s room for considerable driver voodoo.

And even that’s assuming you don’t have anisotropic filtering enabled (in the app and/or forced in the driver config).

You see, GPU realtime rendering is all about cheats, i.e. what’s “good enough”. Even with GL_LINEAR_MIPMAP_LINEAR filtering enabled (and MIPs created), GPU vendors can make their GPUs perform faster with trilinear filtering if they “clamp” to the nearest MIP level when close to one, only sampling from that MIP level. Then, in the middle of the range, they can do a “fast blend” to cover for the fact that they were clamping to the nearest MIP rather than starting/finishing a slower, smoother blend.

17 years ago, this went by the name brilinear. It saves GPU memory bandwidth and speeds up trilinear texture sampling. Do you take a visual quality hit for this? Sure. Is it typically objectionable? Not usually, but it can be in some cases.
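
To make the difference concrete, here is a rough sketch of a “brilinear” blend weight next to a true linear one, assuming the blend happens only over the middle third of the range (the exact window, if any, is up to the driver):

/* f = fract(lambda), in [0, 1). Returns the weight of the upper MIP level. */
static float linear_weight(float f)
{
    return f;                            /* true trilinear blend */
}

static float brilinear_weight(float f)
{
    if (f < 1.0f / 3.0f) return 0.0f;    /* snap to the lower MIP level */
    if (f > 2.0f / 3.0f) return 1.0f;    /* snap to the upper MIP level */
    return (f - 1.0f / 3.0f) * 3.0f;     /* fast blend across the middle third */
}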

Some related settings you might check your GL driver’s config override list for:

  • Anisotropic filtering
  • Texture filtering - Anisotropic sample optimization
  • Texture filtering - Negative LOD bias
  • Texture filtering - Quality
  • Texture filtering - Trilinear optimization

This may have changed, but many years ago IIRC I determined that Texture filtering - Quality = Quality made use of the brilinear cheat, but Texture filtering - Quality = High Quality appeared to disable it. However, that may be totally different now.