It’s the bias applied to the level of detail (LoD).
The texture lookup functions which use partial derivatives, whether implicit (`texture` etc.) or explicit (`textureGrad`), calculate a scale factor ρ as

ρ = max(|∂P/∂x|, |∂P/∂y|)

where P is the texture coordinate.
If a texture is mapped so as to be displayed “at scale” (a one-to-one correspondence between texels and screen pixels), the scale factor is 1.0. If the texture is mapped at half size, the scale factor is 2.0; at double size, it is 0.5. In other words, the scale factor is the distance in texels between the centres of adjacent screen pixels. If the x and y scale factors differ, the larger value is used.
The level of detail (LoD) is calculated as λ = log2(ρ). The `textureLod` function takes the level of detail as an argument rather than calculating it.
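The relationship between the scale factor and λ is plain arithmetic, so it can be checked in a few lines of Python (an illustration of the formula only, not of what the GPU actually executes; `lod` is just an illustrative helper name):

```python
import math

def lod(scale_factor):
    """Level of detail: lambda = log2(rho)."""
    return math.log2(scale_factor)

print(lod(1.0))  # at scale: 0.0 -> base level
print(lod(2.0))  # half size: 1.0 -> mipmap level 1
print(lod(0.5))  # double size: -1.0 (clamped to the base level in practice)
```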
A value of 0 results in mipmap level 0 (the base level) being sampled, a value of 1 samples level 1, and a value of 0.5 uses a 50-50 mix of levels 0 and 1 (assuming `GL_LINEAR_MIPMAP_LINEAR` filtering). More generally, the `_MIPMAP_LINEAR` filters sample the levels floor(λ) and ceil(λ), interpolating between them according to fract(λ), while the `_MIPMAP_NEAREST` filters simply round λ to the nearest integer to select a single mipmap level.
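The level-selection rules above can be sketched as follows. `mix_levels` is a hypothetical helper name, and a real implementation would also clamp λ to the texture's available level range:

```python
import math

def mix_levels(lam, linear=True):
    """Return [(level, weight), ...] for a given lambda.

    linear=True models the _MIPMAP_LINEAR filters,
    linear=False the _MIPMAP_NEAREST filters.
    """
    if not linear:
        # _MIPMAP_NEAREST: round to a single nearest level
        return [(round(lam), 1.0)]
    lo = math.floor(lam)
    frac = lam - lo          # fract(lambda)
    if frac == 0.0:
        return [(lo, 1.0)]
    # _MIPMAP_LINEAR: blend floor(lambda) and ceil(lambda) by fract(lambda)
    return [(lo, 1.0 - frac), (lo + 1, frac)]

print(mix_levels(0.5))                # 50-50 mix of levels 0 and 1
print(mix_levels(0.0))                # base level only
print(mix_levels(1.2, linear=False))  # level 1
```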
The bias is a value added to λ (either calculated or passed as an argument to `textureLod`) before the selection of levels and the interpolation factor. The net result is to make the effective scale factor higher or lower than the actual scale factor. A bias of -1.4 would cause the base level (exclusively) to be sampled for any λ up to 1.4. A λ value of 1.5 with a -1.4 bias would be equivalent to 0.1 with no bias, i.e. a blend of 90% level 0 and 10% level 1.
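The worked example above is easy to verify numerically (pure arithmetic, not a GPU simulation):

```python
import math

lam, bias = 1.5, -1.4
biased = lam + bias                 # ~0.1 (up to floating-point rounding)
frac = biased - math.floor(biased)  # fract(lambda'), the interpolation factor
weights = (1.0 - frac, frac)        # (level 0 weight, level 1 weight) ~ (0.9, 0.1)
print(biased, weights)
```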
A negative bias will typically result in aliasing (e.g. moiré patterns), while a positive bias will result in blurring.
Also: certain texture lookup functions (`texture`, `textureProj`, and the `*Offset` versions of those) accept a `bias` parameter which is added to the LoD calculated from implicit derivatives, in addition to any bias set by the `GL_TEXTURE_LOD_BIAS` texture or sampler parameter.
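Assuming the various biases simply add to the implicitly calculated λ (the per-call `bias` argument and the `GL_TEXTURE_LOD_BIAS` parameter; the parameter names below are illustrative, the sketch itself is plain arithmetic):

```python
import math

def effective_lod(rho, shader_bias=0.0, lod_bias=0.0):
    """lambda' = log2(rho) + per-call bias + GL_TEXTURE_LOD_BIAS."""
    return math.log2(rho) + shader_bias + lod_bias

# A scale factor of 4 (lambda = 2) pulled back to the base level
# by two biases of -1 each:
print(effective_lod(4.0, shader_bias=-1.0, lod_bias=-1.0))  # 0.0
```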