Texture wrapping in shader + mipmapping

I’ve been hitting a simple (but seemingly unsolvable) problem for the past few days.

I’m trying to implement texture wrapping in a pixel shader.

Normally, it should be as easy as simply doing a “fract(uv)” when sampling the texture, but it doesn’t seem to work well when mipmapping is enabled.

I get a 1-pixel seam between the “tiles”. The width of this seam is always 1 pixel, whatever the distance from the camera to the surface of the triangle; minified or magnified, it doesn’t matter.

When only using linear filtering, there is no seam.

Same behavior on Nvidia and ATI cards.

A picture of the problem:
http://www.infinity-universe.com/opengl/wrapping1.jpg

The scene is a simple planar square with UVs going from 0 to 1. The texture is nothing special, and tiles perfectly well of course.

The shader:


gl_FragColor = texture2D(diffuseTex, fract(gl_TexCoord[0].xy * 16.0));

In the pic above, the texture is magnified (the camera is very close to the surface), so shouldn’t it theoretically only sample mipmap #0 in the mipmap chain?

But when I offset the LOD by -10 for example in the texture sampler call:


gl_FragColor = texture2D(diffuseTex, fract(gl_TexCoord[0].xy * 16.0), -10);

… the seam disappears. Same when I force the min/max LOD levels to be 0.

Does anybody have an idea how to remove this seam?

Y.

fract() creates a discontinuity in the derivative of the texture coordinate used to select the mipmap level. You are not getting level 0. Why are you using fract() instead of GL_REPEAT as the wrap attribute of the texture?
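
A quick way to visualize what’s happening (just an untested sketch, reusing the coordinates and 16x scale from your own shader): compare the derivative of the wrapped coordinate with the derivative of the continuous one.


vec2 scaledUV = gl_TexCoord[0].xy * 16.0;
// Away from a seam the two derivatives are identical; on the seam column/row
// the wrapped one jumps by roughly +/-1 per pixel, which is why the hardware
// falls down to the smallest mipmap level there.
float jump = length(dFdx(fract(scaledUV)) - dFdx(scaledUV));
gl_FragColor = vec4(vec3(clamp(jump, 0.0, 1.0)), 1.0);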

I suspected something like that, but that doesn’t tell me how to remove the seam :slight_smile:

I’m using fract() because I eventually want to use this code to select a sub-texture within a texture pack, but I need this sub-texture to be wrapping/tiling.

Y.

You could emulate GL_MIRRORED_REPEAT in the shader. It can be done with a much smaller derivative discontinuity.
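
Something along these lines, for example (just a sketch of the idea, not tested; the helper name is mine): fold the coordinate into [0,1] as a triangle wave, so its derivative only flips sign at the fold instead of jumping by a whole tile like fract() does.


float mirroredRepeat(float x)
{
    float t = fract(x * 0.5) * 2.0;   // period-2 sawtooth in [0, 2)
    return 1.0 - abs(t - 1.0);        // fold into [0, 1]: 0..1..0..1...
}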

Thank you. So the idea is to calculate the LOD level myself and use it in a texture2DLod call. Are there any simple formulas to “emulate” the LOD level calculation in a shader?

In this NVidia paper ( ftp://download.nvidia.com/developer/Papers/2004/Vertex_Textures/Vertex_Textures.pdf ), there’s a section about mipmap LOD level calculation for texture fetching in a vertex shader. They propose using this:


Out.HPOS = mul( ModelViewProj, vPos );
float mipLevel = ( Out.HPOS.z / Out.HPOS.w ) * maxMipLevels;

… which I just tested, but it gives extremely unbalanced/ugly results.

Y.

On ATI cards you can use the GL_ATI_shader_texture_lod extension. This extension allows you to sample textures using explicit derivatives calculated by the dFdx/dFdy functions. If you calculate the derivatives from the original coordinates and then sample with the modified coordinates, the correct mipmap level should be selected.

You can look at the paper named “Level of Detail Selection in GLSL” within the ATI SDK for more info.
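
Roughly, the idea looks like this (untested sketch, reusing the sampler and 16x scale from your first post; texture2DGrad is the gradient lookup provided by those extensions):


vec2 scaledUV = gl_TexCoord[0].xy * 16.0;
// Derivatives come from the continuous coordinates, the lookup uses the
// wrapped ones, so the selected mipmap level is the same as without fract().
gl_FragColor = texture2DGrad(diffuseTex, fract(scaledUV),
                             dFdx(scaledUV), dFdy(scaledUV));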

Although I don’t have a solution, it seems as if the problem only happens on “odd” pixels, meaning even pixel pairs all compute correctly. That hints at something using dFdx, given that dFdx is only well defined for even pixel pairs?

However, I don’t see how mipmapping is going to work with manually tiled regions in a texture anyway (unless you use nearest filtering for the lookup).

Progress…


// Emulates the fixed-function mipmap LOD (lambda) selection.
// uv: the continuous (un-wrapped) texture coordinate.
// textureSize: the texture dimensions in texels.
float MipmapLevel(vec2 uv, vec2 textureSize)
{
    // Screen-space derivatives of the texel-space coordinate.
    vec2 dx = dFdx(uv * textureSize);
    vec2 dy = dFdy(uv * textureSize);
    float d = max( dot(dx, dx), dot(dy, dy) );
    return log2( sqrt(d) );
}

… seems to be working pretty well to select the lod level.
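
For reference, here’s roughly how I plug it in (the 256x256 texture size is just an example):


vec2 scaledUV = gl_TexCoord[0].xy * 16.0;
// LOD from the continuous coordinates, lookup with the wrapped ones.
float lod = MipmapLevel(scaledUV, vec2(256.0));
gl_FragColor = texture2DLod(diffuseTex, fract(scaledUV), lod);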

Now I’m wondering if what I’m doing is standard. The spec says that texture2DLod only works in the vertex shader, yet I’m using it fine on NVidia cards in the pixel shader. Is this a typo in the spec? Can I safely rely on this behavior even on ATI cards?

Let’s say that it doesn’t work on ATI cards. Could I use this solution on NVidia and GL_ATI_shader_texture_lod on ATI? Is GL_ATI_shader_texture_lod supported on all cards that also support GLSL (i.e. pixel shaders 2.0+)?

Y.

Hi Ysaneya,

I’ve only used an ATI card for about 2 months before going back to NVidia so I can’t help you there, but here’s a small performance tip:

log2(sqrt(d)) == 0.5f*log2(d)

Cheers,
N.

There’s texture2DGrad, which allows you to specify gradients and is supported in fragment shaders on cards supporting ATI_shader_texture_lod or EXT_gpu_shader4. It ruined performance on my 8800 when I tried using it to overcome a similar problem, though.

Nico: thanks for the tip.

AlexN: strange, I just tested texture2DGrad on an NV 7800 GTX and it seems to work at full performance.

Y.

Isn’t there a simple trunc() function? fract() sounds wrong, as 1.0 would become 0.0 - isn’t that the error, btw?

I tried it again on my 8800 GTX and 7800 GTX, and found that texture2D and texture2D with a large negative bias both run at full speed, but texture2DLod runs at half speed on my 8800, and texture2DGrad runs at 1/4 speed on both cards. This is from a relief mapping shader that is entirely texture-fetch bound… perhaps it is less noticeable in more ALU-bound shaders? I wasn’t able to entirely mask the difference by adding ALU ops, though.

I tried again, and there’s indeed a small difference. I increased the number of texture samples and kept the number of ALU instructions constant; sampling 32 times I get 120 fps with texture2DLod and 200 fps with texture2D. All on an 8800 GTX under Vista.

I have tested texture2DLod on an ATI X1950, and it doesn’t complain about it being used in a pixel shader (unlike what the GLSL spec says), which is good news. So I think my problem is now solved: I can get sub-texture wrapping WITH mipmapping.

Y.

Sorry to revive this old thread, but I have a similar need to Ysaneya’s. I need to calculate the level-of-detail value (aka mipmap lambda) for accessing different textures, and so cannot rely on the texture2D samplers. I just wanted to air the idea of exposing the automatically computed value in fragment shaders. Wouldn’t this make sense as a suggestion for the next GLSL version? Yes, we can re-compute it manually using derivatives, but if the implementation is already required to compute it (although possibly an approximation, OpenGL 2.1 spec, p. 173-174), why not expose the value to avoid doing redundant work? Maybe I should post the idea in the “Suggestions…” forum.

@Ysaneya: in case you haven’t discovered it by now, texture2DLod is available in the fragment shader (although the Shading Spec 1.20.8 I am looking at hasn’t been corrected), but only when using the extension EXT_gpu_shader4 (http://www.opengl.org/registry/specs/EXT/gpu_shader4.txt, search for “Add to section 8.7 Texture Lookup Functions”). So it should be safe on ATI hardware that supports this extension.
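
For anyone finding this later: in GLSL that means enabling the extension at the top of the fragment shader, something like:


#extension GL_EXT_gpu_shader4 : enable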