How does OpenGL determine which mipmap to use?

Title. How does the texture decide, “I am going to use this mipmap”? Does the texture have information about how big the sample space is? Or is it determined from the current depth value, or something else?


It’s determined by the difference between texture coordinates for adjacent fragments.

If the texture coordinates are T, the base level-of-detail (before any bias settings are added) is

log2(max(length(dFdx(T)), length(dFdy(T))))

or an approximation to it; e.g. length() may use |x|+|y|+|z| or max(|x|, |y|, |z|) rather than √(x² + y² + z²).
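To make the formula concrete, here’s a small CPU-side sketch in Python. The function names (`base_lod`, `length_euclid`, etc.) are made up for illustration; the derivatives are assumed to already be in texel units, and no LOD bias or clamping is applied:

```python
import math

def length_euclid(v):
    # Exact Euclidean length: sqrt(x^2 + y^2 + ...)
    return math.sqrt(sum(c * c for c in v))

def length_manhattan(v):
    # Cheaper approximation some hardware may use: |x| + |y| + ...
    return sum(abs(c) for c in v)

def length_max(v):
    # Even cheaper approximation: max(|x|, |y|, ...)
    return max(abs(c) for c in v)

def base_lod(dtdx, dtdy, length=length_euclid):
    # Base level-of-detail (before bias) from the screen-space
    # derivatives of the texture coordinates, in texel units:
    # log2(max(length(dFdx(T)), length(dFdy(T))))
    return math.log2(max(length(dtdx), length(dtdy)))

# Example: one pixel step in x covers 4 texels, one step in y covers 2.
dtdx = (4.0, 0.0)
dtdy = (0.0, 2.0)
print(base_lod(dtdx, dtdy))  # -> 2.0, i.e. mipmap level 2 is sampled
```

The choice of `length()` only changes the result by a fraction of a level, which is why the spec leaves the exact function implementation-defined.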

That’s assuming the shader samples with texture rather than e.g. textureGrad or textureLod, which supply explicit gradients or an explicit LOD instead.

With the fixed-function pipeline, the scale factor is a projective function of window coordinates (although not necessarily determined by depth, if the texture coordinates at the vertices don’t all have the same q coordinate). With shaders, texture coordinates are computed per fragment, so the scale factor can vary arbitrarily, e.g. due to the combination of normal maps and environment maps.
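A quick numerical illustration of the projective point: if s and q are interpolated linearly across a span but q differs at the endpoints, the final coordinate s/q is a rational function of window position, so its derivative (and hence the LOD) changes across the span. This is a simplified sketch (real fixed-function interpolation works on s/w, t/w, q/w), with hypothetical names:

```python
def projective_texcoord(x, s0, q0, s1, q1):
    # s and q interpolated linearly in window space (x in [0, 1]);
    # the final texture coordinate is s/q.
    s = s0 + (s1 - s0) * x
    q = q0 + (q1 - q0) * x
    return s / q

def dcoord_dx(x, eps=1e-5):
    # Forward difference of s/q with s0=0, q0=1, s1=1, q1=3,
    # i.e. f(x) = x / (1 + 2x), whose derivative is not constant.
    return (projective_texcoord(x + eps, 0.0, 1.0, 1.0, 3.0)
            - projective_texcoord(x, 0.0, 1.0, 1.0, 3.0)) / eps

# The texel footprint shrinks toward the q=3 end of the span,
# even though nothing here depends on a depth value.
print(dcoord_dx(0.1), dcoord_dx(0.9))
```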


Also, that’s for the simple trilinear case. Enabling anisotropic texture filtering makes this more complicated: the GPU takes multiple samples across the projected footprint to better approximate the integral over it.
