Separate filtering by texcoords

For example, I have a 3D texture, and I want GL_LINEAR_MIPMAP_LINEAR for the S and T texcoords, and GL_LINEAR_MIPMAP_NEAREST or GL_NEAREST_MIPMAP_LINEAR for the R texcoord.
But that isn’t allowed now…
It would be a very useful thing.

Originally posted by CybeRUS:
For example, I have a 3D texture, and I want GL_LINEAR_MIPMAP_LINEAR for the S and T texcoords, and GL_LINEAR_MIPMAP_NEAREST or GL_NEAREST_MIPMAP_LINEAR for the R texcoord.
But that isn’t allowed now…
It would be a very useful thing.

I’m not a hardware person, but I think there may be a severe limit imposed by the hardware here. Each combination of S, T, R filters would potentially need its own logic, and the number of combinations grows quickly (9, 16, 25…).

I agree, though, that it would be very useful to have cheaper 3D texturing, where the goal is to use the 3rd coordinate to pick independent slices from a stacked texture. This could reduce texture bind changes for similarly sized/formatted textures without losing mipmapping and texture wrap modes (a common problem with texture “pages”).

For this, at least, I’d encourage the hardware folks to consider a special ST_MIPMAP_LINEAR_R_NEAREST kind of mode, potentially without any filtering on the R coord. Maybe call it 2.5D texturing.

Is there another way to do this currently?

Avi
www.realityprime.com

Even though I think it would be technically possible to sample, say, linearly along the S coord and nearest along the R coord, it makes no sense for one mipmap to be sampled simultaneously linearly for S and nearest for R.

Originally posted by vincoof:
Even though I think it would be technically possible to sample, say, linearly along the S coord and nearest along the R coord, it makes no sense for one mipmap to be sampled simultaneously linearly for S and nearest for R.

Were you responding to the original post, or the more limited case of 2.5D textures?

If the latter, I’m not sure what the right token would be, but the implication is this sort of 3D mipmap is really just a stack of 2D mipmaps. In other words, mip-generation is only done in two dimensions and the third remains full-height across all mip-levels with no down-sampling. The third coordinate chooses a 2D slice (one or two mip levels of that slice, depending on the mode) and from then on, it’s standard 2D mipmapping. This is not the current 3D mipmapping, which is why I was calling it 2.5D.
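The level geometry described above can be written down directly. A sketch (helper names are made up) comparing the proposed 2.5D mip chain against true 3D mipmapping:

```c
/* Mip level dimensions for the "2.5D" scheme described above:
   width and height halve per level, the slice count stays fixed. */
static void mip25d_size(int w, int h, int d, int level,
                        int *lw, int *lh, int *ld)
{
    *lw = w >> level; if (*lw < 1) *lw = 1;
    *lh = h >> level; if (*lh < 1) *lh = 1;
    *ld = d;                         /* R axis is never downsampled */
}

/* Standard 3D mipmapping, by contrast, halves all three axes. */
static void mip3d_size(int w, int h, int d, int level,
                       int *lw, int *lh, int *ld)
{
    mip25d_size(w, h, d, level, lw, lh, ld);
    *ld = d >> level; if (*ld < 1) *ld = 1;
}
```

So a 256×256×8 stack keeps all 8 slices at every 2.5D level, whereas the real 3D chain collapses the depth axis along with the others.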

This, by the way, would be potentially useful for handling virtual textures without stepping on the SGI hardware clipmap patents…

Avi
www.realityprime.com

I was replying to the former post. Sorry for not having been more specific.

The 2.5D texturing is a problem of its own, just like the 3.5D texturing.
Even though 2.5D and 3D look similar (both roughly use slices of 2D textures), they are really different from the hardware point of view, in practice. Otherwise 3D texturing would have been hardware-accelerated sooner, even if it meant disabling mipmaps.

Originally posted by vincoof:
The 2.5D texturing is a problem of its own, just like the 3.5D texturing.
Even though 2.5D and 3D look similar (both roughly use slices of 2D textures), they are really different from the hardware point of view, in practice. Otherwise 3D texturing would have been hardware-accelerated sooner, even if it meant disabling mipmaps.

I can understand why 2.5D and 3D texturing would be very different under the hood, but all the differences I’m aware of would tend to make 2.5D much easier to implement, especially if it’s made clear that changes to the third coordinate, in this case, are slow or are even disallowed within primitives. If the third coord doesn’t change within primitives, 2.5D might even be emulated as a series of 2D texbinds, using the third coordinate as an index into a texture list…

Since the main goal is to be able to bind many similarly formatted textures at once (to avoid texbinds, increase primitive batch sizes, etc…) that restriction might be reasonable.
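If the third coordinate really is constant per primitive, the emulation could be as simple as an indexed lookup into a list of ordinary 2D texture object names. A hypothetical helper (slice_texture is not a GL call; the caller would hand the result to glBindTexture):

```c
/* Hypothetical helper for the emulation described above: the third
   coordinate selects, with no filtering, one 2D texture object name
   from a list; the result is what glBindTexture would receive. */
static unsigned int slice_texture(const unsigned int *names, int count,
                                  float r)
{
    int i = (int)(r * (float)count);   /* nearest-style pick along R */
    if (i < 0) i = 0;
    if (i > count - 1) i = count - 1;
    return names[i];
}
```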

(…)especially if it’s made clear that changes to the third coordinate, in this case, are slow or are even disallowed within primitives. If the third coord doesn’t change within primitives, 2.5D might even be emulated as a series of 2D texbinds, using the third coordinate as an index into a texture list

Actually it is possible, thanks to mipmapping with 2D texturing: force the texture’s min/max mipmap levels to the desired “slice”, actually called “level-of-detail” (I don’t remember the extension allowing that, unfortunately, but it’s definitely supported on a wide range of graphics cards).

But I don’t really like the fact that it’s not possible to switch from one slice to another, like 3D texturing would allow. Just my 2c though.
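The trick described here is most likely SGIS_texture_lod (whose clamps, GL_TEXTURE_BASE_LEVEL / GL_TEXTURE_MAX_LEVEL and GL_TEXTURE_MIN_LOD / GL_TEXTURE_MAX_LOD, were promoted to core in OpenGL 1.2). Its effect on level selection can be modeled with a few lines of C:

```c
/* Model of the LOD clamp: whatever level-of-detail the hardware
   computes, pinning both clamps to the same value forces that one
   level (i.e. that one stored "slice") to be sampled. */
static int clamped_level(int computed, int base, int max)
{
    if (computed < base) return base;
    if (computed > max) return max;
    return computed;
}
```

With base == max == slice, every fragment lands on the chosen slice regardless of the computed LOD.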

Originally posted by vincoof:
Actually it is possible, thanks to mipmapping with 2D texturing: force the texture’s min/max mipmap levels to the desired “slice”, actually called “level-of-detail” (I don’t remember the extension allowing that, unfortunately, but it’s definitely supported on a wide range of graphics cards).

Yeah. It’s probably cheaper than a texbind, but has the problems of LODs being different powers of 2, effectively disabling mipmapping (while still incurring the cost), and requiring an API call within a render block to change slices.

At least with the old “texture page” approach, the 2D texcoords can switch between textures within a rendering batch, so that’s still probably better overall. My goal is to have the functionality of texture pages with wrapping and mipmapping for each sub-texture.

I’ll probably explore the shader approach to see if I can get sub-texture-repeat and some sort of mipmapping working for texture pages.

Originally posted by Cyranose:
Yeah. It’s probably cheaper than a texbind, but has the problems of LODs being different powers of 2, effectively disabling mipmapping (while still incurring the cost), and requiring an API call within a render block to change slices.

Fortunately, I think the texture can be specified with any size as long as the levels of detail are configured to allow only one slice.
I may be wrong though; better check the spec.

Originally posted by Cyranose:
I’ll probably explore the shader approach to see if I can get sub-texture-repeat and some sort of mipmapping working for texture pages.

Unfortunately, fragment programs don’t allow you to force derivatives (they say it will be available in a future extension). At best you will be able to bias the level of detail.
I don’t know about fragment shaders though. Fragment programs and fragment shaders differ a bit, so maybe it’s possible in fragment shaders.