Just curious. Do you find that this yields any edge-on aliasing due to not having all the MIP levels around?
Do you implement blending between MIPs?
I still generate mip maps. E.g. if the top mip is an 8x1 grid of 512x512 textures, the next mip is an 8x1 grid of 256x256 textures, all the way down to an 8x1 grid of 1x1 textures. (I also clamp the mip level so that the 4x1, 2x1 and 1x1 atlas mips are never accessed.)
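For concreteness, the cut-off for the mip clamp follows from the sub-texture size alone. A small sketch (a hypothetical helper, not the poster's actual code):

```python
from math import log2

def max_safe_lod(sub_texture_size):
    """Highest mip level at which each sub-texture is still >= 1x1.

    For a 512x512 sub-texture this is log2(512) = 9, i.e. mips
    512, 256, ..., 1. Beyond that, the atlas mips (the 4x1, 2x1 and
    1x1 levels in the 8x1-grid example) would blend neighbouring
    sub-textures together, so the LOD is clamped to this value.
    """
    return int(log2(sub_texture_size))

# 8x1 grid of 512x512 sub-textures: LOD is clamped to [0, 9]
```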
Essentially, the hardware still does the filtering between the mip maps; I just clamp the texture coordinates so that there is no filtering across sub-texture edges.
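The texcoord clamp described above can be sketched as follows, assuming a simple grid atlas layout; the function name, parameters, and half-texel inset are my own illustration, not the poster's code:

```python
def clamp_to_subtexture(u, v, tile_x, tile_y, grid_w, grid_h, lod, sub_size):
    """Clamp atlas UVs so bilinear filtering at the given mip level
    never reads texels from a neighbouring sub-texture.

    Hypothetical layout: a grid_w x grid_h atlas of square sub_size
    sub-textures; (tile_x, tile_y) selects the sub-texture. The inset
    is half a texel at the sampled mip, since a bilinear tap reaches
    half a texel past the coordinate.
    """
    # Sub-texture size in texels at this mip level
    mip_size = max(sub_size >> lod, 1)
    # Half-texel inset, expressed in atlas UV units
    half_u = 0.5 / (mip_size * grid_w)
    half_v = 0.5 / (mip_size * grid_h)
    # Sub-texture bounds in atlas UV space
    u0, u1 = tile_x / grid_w, (tile_x + 1) / grid_w
    v0, v1 = tile_y / grid_h, (tile_y + 1) / grid_h
    return (min(max(u, u0 + half_u), u1 - half_u),
            min(max(v, v0 + half_v), v1 - half_v))
```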
Have you tried the brilinear cheat?
I am not familiar with this. Can you explain?
Have you implemented any aniso with manual sampling?
And how does the performance stack up compared to using ordinary MIP textures and GPU trilinear filtering?
I am not that happy with the performance: it seems to perform only on par with a generic six-texture lookup and blend (at least on hardware that does not support array textures).
I noticed that in my previous post I floor the mip map level, which essentially disables trilinear filtering.
Removing the floor() makes trilinear filtering work, but the half-pixel offset value is then incorrect, since two mip levels are sampled during trilinear filtering. I should probably just use the larger of the two half offsets in that case.
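Taking "the largest half offset" as the half-texel of the coarser of the two mips the hardware blends, the conservative inset could look like this (a hypothetical helper, under the same grid-atlas assumptions as above):

```python
from math import floor

def trilinear_safe_inset(lod, sub_size, grid_dim):
    """Half-texel inset, in atlas UV units along one axis, that is
    safe for trilinear sampling at a fractional 'lod'.

    The hardware blends mips floor(lod) and floor(lod)+1; the coarser
    mip has the larger half-texel, so clamping by that one keeps both
    taps inside the sub-texture.
    """
    coarse = int(floor(lod)) + 1
    mip_size = max(sub_size >> coarse, 1)
    return 0.5 / (mip_size * grid_dim)
```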
Ah, ok. When you said atlas, I presumed you meant a non-MIPmapped texture for the atlas – something like id's Virtual Texturing. And the floor() on the MIP level made it look like you weren't doing any manual cross-MIP-level filtering. I was envisioning this MIP level being used to offset the real texcoords to access MIPs of a particular level.
[quote]Have you tried the brilinear cheat?
I am not familiar with this. Can you explain?[/quote]
You’re not doing your own texture filtering in the shader, so it’s not really applicable. But this is an ancient NVidia trick to save bandwidth when doing trilinear interpolation. Basically, instead of always sampling 2 MIPs and blending between them, you only sample 2 MIPs for like the middle 33% between two MIPmaps and do a fast blend into and out of that region. When you’re less than 33% away from a MIPmap, you just clamp to the result that MIPmap gives you and don’t even sample the other MIP. Results in lower texture filtering quality (blurring/aliasing), but saves bandwidth. See the pictures here:
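The LOD remap behind that trick can be sketched as a small function; the 33% band and this exact formula are my own illustration of the idea, not any vendor's actual hardware behaviour:

```python
from math import floor

def brilinear_lod(lod, band=0.33):
    """Remap a fractional LOD so its fractional part snaps to 0 or 1
    outside a middle 'band' and ramps linearly inside it.

    Fraction in [0, lo] -> 0 and [hi, 1] -> 1: only one mip is read.
    Fraction in (lo, hi): both mips are read and blended, as in true
    trilinear. Smaller 'band' = cheaper but blurrier/more aliased.
    """
    base = floor(lod)
    f = lod - base
    lo = (1.0 - band) / 2.0
    hi = 1.0 - lo
    if f <= lo:
        t = 0.0
    elif f >= hi:
        t = 1.0
    else:
        t = (f - lo) / (hi - lo)
    return base + t
```

Near a mip level the remapped LOD is an integer, so the second mip's weight is zero and its fetch can be skipped entirely; only in the middle band does it degenerate to ordinary trilinear.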