confused about texture filters

just so I get this right:

the following table is {MIN_FILTER,MAG_FILTER} as passed to glTexParameter.

// these mean no filtering:
{GL_LINEAR,GL_LINEAR}
{GL_LINEAR,GL_NEAREST}
// these mean bilinear filtering:
{GL_NEAREST_MIPMAP_NEAREST,GL_NEAREST}
{GL_NEAREST_MIPMAP_LINEAR,GL_NEAREST}
// these mean trilinear filtering:
{GL_LINEAR_MIPMAP_NEAREST,GL_LINEAR}
{GL_LINEAR_MIPMAP_LINEAR,GL_LINEAR}

Is this correct?

Finally, if I use the texture_filter_anisotropic extension, do the min/mag filters matter at all? Is it common to have anisotropic filtering enabled but without bi/trilinear filtering?

GL_NEAREST - no filtering at all
GL_NEAREST_MIPMAP_NEAREST - no filtering, but mipmap selection
GL_NEAREST_MIPMAP_LINEAR - linear filtering between mipmap levels, but no filtering inside each mipmap level
GL_LINEAR - bilinear filtering
GL_LINEAR_MIPMAP_NEAREST - bilinear filtering, with mipmap selection
GL_LINEAR_MIPMAP_LINEAR - trilinear filtering

For magnification, the MIPMAP settings make no sense as mipmapping only applies to minification. The meaning of NEAREST and LINEAR is the same though.

to hazelwood:
not quite.
unless you choose to use nearest, you’ll always end up with some combination of filtering.

min applies when a projected texel is smaller than a pixel, mag when it is bigger. so, analyzing each case:

min=GL_LINEAR / mag=GL_LINEAR
when texels are being shrunk, you will get aliasing, similar to nearest sampling, but a little better since the interpolation evaluates to a better estimate of the real pixel color… but it all depends on the texture itself.
when texels are being enlarged, a linear filter is used, and you won’t see texels as “quads under perspective projection”; colors will shade smoothly instead.

min=GL_LINEAR / mag=GL_NEAREST
when texel enlarges, you get the quads.

min=GL_NEAREST_MIPMAP_NEAREST / mag=GL_NEAREST
as texels shrink, the rasterizer accesses the appropriate mipmap and takes one sample from it.
you get some aliasing because of the nearest sampling, but thanks to the prefiltered mipmap it will not be that prominent.
when texel enlarges, you get the quads.

min=GL_NEAREST_MIPMAP_LINEAR / mag=GL_NEAREST
as texels shrink, the rasterizer lerps between two mipmap levels: this will keep you from seeing where one mip level ends and the next begins.
but still, as you take one sample per mipmap level, you get some aliasing.
when texel enlarges… you got it.

min=GL_LINEAR_MIPMAP_NEAREST / mag=GL_LINEAR
texels shrink, and you get almost no aliasing thanks to the 4 samples per mipmap level, but you can see where the rasterizer switches between the nearest mipmaps.
but as texel enlarges, the story is smooth.
this is normally the setup when a game says “trilinear disabled” or something.

min=GL_LINEAR_MIPMAP_LINEAR / mag=GL_LINEAR
the real trilinear sampling.
with a total of 8 samples per pixel during minification, almost no aliasing finds its way to your display, and texel magnification is smooth.

so, the three “modes” will be:
nearest sampling - at most 1 texel being sampled
min=GL_NEAREST / mag=GL_NEAREST

bilinear sampling - at most 4 samples
min=GL_LINEAR / mag=GL_LINEAR

trilinear - at most 8 samples
min=GL_LINEAR_MIPMAP_LINEAR / mag=GL_LINEAR

…btw, these are the modes i usually employ in my apps, but all depend on what you wish to do.
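for reference, the three modes above translate into glTexParameteri calls like this (a sketch assuming a current GL context and a texture bound to GL_TEXTURE_2D; the trilinear setup also assumes the texture has a full mipmap chain):

```c
/* nearest sampling - 1 texel */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* bilinear - up to 4 texels, mipmaps not consulted */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* trilinear - up to 8 texels across two mip levels */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```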

about anisotropy, quick answers: yes / don’t think so.

long answers:
you know that anisotropic filtering is an attempt to cope with the fact that the image of a pixel in texture space is practically never a square (isotropic sampling).
instead it resembles a warped trapezoid, one with every side bent some way.
anisotropic sampling approximates this shape with a rectangle, thus leading to potentially more than 8 samples per pixel: if you sample 2 texels in the s direction, you could need to sample 2*n in the t direction.
it all depends on the texel orientation.

if you say you minify in nearest mode, AF won’t come in… because you’re not filtering in the end.

if you say you’re minifying with linear interpolation, AF will kick in trying to approximate the pixel projection in texture space, and you will sample the texture at least 4 times, at most 4*n, depending on your settings.

by adding mipmaps too, the total samples will be about 50% more: the region of interest in the next mipmap level needs roughly half as many samples.

so you understand that if no filtering is in use, AF is useless… since it is a way of filtering.

thanks guys for the good explanation. I haven’t done OpenGL for years so I need to catch up.

I see the anisotropic extension is available since 2000 or so. Do you think it is ok performance-wise to use it on GeForce2 or similar cards? If I look at Quake3 it doesn’t look like it’s using it at all. Thanks.

Q3 doesn’t use it (but Q3 was 1999, I think).

On a GeForce 2 you can use anisotropy, but be careful: use it only on textures that really benefit from it. The GF2 might not like it if you use it for everything.

On today’s cards you CAN use anisotropic filtering for everything, although that’s wasted processing power, since not all kinds of textures need it. And still, there will be a noticeable performance hit.

BTW: I found that on ATI cards 2x anisotropic filtering is the best choice for quality and speed. 4x and up don’t give you that much more quality, but they cost a lot of performance.

Jan.

what about nvidia?
are there any papers I could read about this performance issue on ATI?