Wrong mipmap level when texture is viewed from an angle

I'm doing my first tests showing a texture (ETC2, 512x512) on two triangles that together form a square, based on code from a tutorial.
It works fine when the quad is seen from “above”, but when it is rotated so that it is seen from an angle, such that the texture gets stretched out in one direction and compressed in the perpendicular direction, the result becomes clearly pixelated.
I suspected that the rendering engine used the “wrong” (i.e. one of the lower-resolution) mipmaps, and indeed it looks better if I modify the code to supply only the full-sized version.
Is there a way to tell OpenGL to use the smaller mipmaps only when the on-screen quad is small in both dimensions?

Are you familiar with anisotropic texture filtering? Do you have it enabled? To what level?

Have you provided all of the MIPmaps? Are you using LINEAR_MIPMAP_LINEAR min filtering?
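For reference, that min filter is set with glTexParameteri on the bound texture:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);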

What GPU driver are you using? Sometimes drivers provide settings that allow you to tune how LINEAR or LINEAR_MIPMAP_LINEAR filtering is performed, to trade quality for performance.

IIRC, in the absence of anisotropic texture filtering (i.e. with a straight trilinear filter), the texture axis that requires the lowest-resolution MIPmap is the one used to select which MIPmap levels are sampled and interpolated. So you’d expect a blurrier texture result the more edge-on you get, especially if you’re using textures which map to the world with a square aspect ratio.
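As a rough worked example: if a screen pixel’s footprint in the texture covers about 1 texel horizontally but 8 texels vertically, trilinear filtering picks LOD = log2(8) = 3, i.e. the 64x64 level of a 512x512 texture, so the horizontal direction is sampled at one-eighth of the resolution it could support.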

In the absence of anisotropic texture filtering, you can achieve higher-quality edge-on filtering if you know in advance which direction the texture will be viewed edge-on from, by storing lower resolution in that dimension. The goal is to pre-compute anisotropic views of the texture so that the sampled pixel footprint approximately matches a texel.
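For instance, assuming a quad viewed edge-on along the texture’s vertical axis: instead of the usual square chain 512x512, 256x256, …, you might store a chain like 512x512, 512x256, …, 512x1 that halves only the vertical dimension (essentially one row of a so-called RIP-map).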

If you’re using a fragment shader (rather than fixed-function texturing), you can use textureLod to specify the desired level explicitly.
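For example, here is a minimal fragment shader sketch (the tex and uv names are just illustrative) that forces sampling from the base level:

#version 330 core

uniform sampler2D tex;
in vec2 uv;
out vec4 fragColor;

void main()
{
    // LOD 0 always samples the full-resolution level, regardless of
    // how stretched or compressed the quad is on screen.
    fragColor = textureLod(tex, uv, 0.0);
}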

The 2D overload of the texture function is (very) roughly equivalent to:

vec4 texture(sampler2D tex, vec2 texcoord, float bias)
{
    // Texture coordinates scaled into texels of the base (level 0) image.
    vec2 texcoordPixels = texcoord * vec2(textureSize(tex, 0));
    // Size of the pixel's footprint in texels, taken along the worst (largest) axis.
    float scale = max(length(dFdx(texcoordPixels)), length(dFdy(texcoordPixels)));
    return textureLod(tex, texcoord, log2(scale) + bias);
}

If you replace max with min, it will use the dimension with the lowest scale factor rather than the highest. This will typically result in aliasing in the “compressed” direction rather than blurring (with GL_LINEAR_MIPMAP_*) or pixellation (with GL_NEAREST_MIPMAP_*) in the “stretched” direction.

Given that you’re seeing pixellation, I’m guessing that you’re using GL_NEAREST_MIPMAP_*.

If you want to avoid all of those issues for “tangential” textures, consider using anisotropic filtering by calling glTexParameterf with a parameter name of GL_TEXTURE_MAX_ANISOTROPY (values greater than 1.0 enable anisotropic filtering). But this has a cost, as it reads multiple texels from the mipmap level corresponding to the lower scale factor (i.e. the higher-resolution mipmap level), then averages them along the direction corresponding to the higher scale factor to avoid aliasing.
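A minimal sketch, assuming the widely supported EXT_texture_filter_anisotropic extension (the GL 4.6 core names are the same without the _EXT suffix):

GLfloat maxAniso = 1.0f;
// Query the largest anisotropy level the implementation supports (commonly 16.0).
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
// Enable anisotropic filtering on the currently bound texture,
// clamped to the implementation's maximum.
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);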

Thank you, Dark_Photon. Adding this one line

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0);

made the texture look clearly better from the side. I used AMD’s Compressonator to generate the compressed texture including mipmaps, but it does not appear to have an option for the fancy mipmaps in your link that are reduced in size in one axis only. Apparently the anisotropic filter can do its job anyway.

Sure thing. Glad that worked for you.

Yes. Precomputed anisotropic textures were a solution used before GPUs had built-in anisotropic filtering. The built-in filtering has a cost, but if that cost is acceptable in your use case, there’s not much point in using precomputed anisotropic textures. The GPU’s built-in solution automatically supports different viewing directions, whereas the precomputed solution bakes one in.