I’ve been stuck on a simple (but seemingly unsolvable) problem for the past few days.
I’m trying to implement texture wrapping in a pixel shader.
Normally, it should be as easy as doing a “fract(uv)” when sampling the texture, but it doesn’t work well when mipmapping is enabled.
I get a 1-pixel seam between the “tiles”. The width of this seam is always 1 pixel, regardless of the camera’s distance to the triangle’s surface; minified or magnified, it doesn’t matter.
When only using linear filtering, there is no seam.
Same behavior on Nvidia or ATI cards.
A picture of the problem:
http://www.infinity-universe.com/opengl/wrapping1.jpg
The scene is a simple planar square with UVs going from 0 to 1. The texture is nothing special, and tiles perfectly well of course.
The shader:
gl_FragColor = texture2D(diffuseTex, fract(gl_TexCoord[0].xy * 16.0));
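For context, the complete fragment shader is essentially just this (a minimal sketch; diffuseTex is the only uniform involved):

```glsl
uniform sampler2D diffuseTex;

void main()
{
    // Tile the texture 16 times across the quad by wrapping
    // the interpolated UVs back into [0, 1) in the shader.
    vec2 uv = fract(gl_TexCoord[0].xy * 16.0);
    gl_FragColor = texture2D(diffuseTex, uv);
}
```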
In the pic above, the texture is magnified (the camera is very close to the surface), so theoretically shouldn’t it only sample mipmap #0 in the mipmap chain?
But when I bias the LOD by -10, for example, in the texture sampling call:
gl_FragColor = texture2D(diffuseTex, fract(gl_TexCoord[0].xy * 16.0), -10);
… the seam disappears. Same when I force the min/max LOD levels to 0.
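One workaround I’m considering but haven’t fully tested (a sketch, assuming the GL_ARB_shader_texture_lod extension is available): compute the screen-space derivatives from the continuous, unwrapped coordinate and pass them to the sampler explicitly, so the discontinuity introduced by fract() can’t inflate the LOD estimate at the tile edges:

```glsl
#extension GL_ARB_shader_texture_lod : require

uniform sampler2D diffuseTex;

void main()
{
    vec2 scaled = gl_TexCoord[0].xy * 16.0;
    // Derivatives of the continuous coordinate are smooth everywhere,
    // unlike the derivatives of fract(scaled) at the tile boundaries.
    vec2 ddx = dFdx(scaled);
    vec2 ddy = dFdy(scaled);
    gl_FragColor = texture2DGradARB(diffuseTex, fract(scaled), ddx, ddy);
}
```

This would only confirm the cause if the implicit derivative computation across the fract() jump is really what’s selecting the wrong mip level.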
Does anybody have any idea to remove this seam ?
Y.