Texture sampling artifacts and floor()


I am seeing a strange texture sampling artifact when using a binary ramp (constructed with the floor() function) to index into a texture. The result, which looks wrong to me, can be seen in this image:


Here is the shader code that produced this image:

#ifdef GL_ES
precision highp float;
#endif

uniform vec2 resolution;
uniform sampler2D tex0;

void main(void)
{
    vec2 tc = gl_FragCoord.xy / resolution.xy;
    float idx = max(floor(sin(tc.x*10.0)*1.5), 0.0);
    idx *= 0.5;
    idx += 0.1;

    vec3 colTex = texture2D(tex0, vec2(idx, 0.1)).xyz;
    vec3 colIdx = vec3(idx);

    gl_FragColor = vec4(colTex, 1.0);
}

tc runs from 0…1.
idx creates a repeating ramp pattern of zeros and ones. These are then mapped to the values 0.1 and 0.6, which are used to sample the texture. The resulting image should therefore contain only two colors, not four.
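To sanity-check that claim on the CPU, here is a small Python sketch (a stand-in for the GLSL; the 640-pixel sweep of tc.x is a hypothetical resolution) that collects every distinct idx value the shader math can produce:

```python
import math

# Sweep tc.x over 0..1 at a hypothetical 640-pixel resolution and
# collect every distinct index value the shader expression yields.
values = set()
for px in range(640):
    tc_x = px / 640.0
    idx = max(math.floor(math.sin(tc_x * 10.0) * 1.5), 0.0)
    idx = idx * 0.5 + 0.1
    values.add(round(idx, 6))

print(sorted(values))  # only two distinct values: [0.1, 0.6]
```

So before the texture fetch there really are only two index values; the extra colors must come from the fetch itself.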

If you have a WebGL-enabled browser, you can easily reproduce the problem by feeding the shader code into that little ShaderToy:

I used a simple 4x4 gradient texture that can be found here:

The problem does not seem to be driver related, since I tested on several machines: ATI, nVidia, different driver versions; it is even reproducible on mobile devices. Different texture filtering settings shouldn’t play a role. Mipmapping was also turned off in my own version, as were AA and anisotropic filtering.

Has anyone observed similar patterns?
How can this be explained?

No it doesn’t; it creates a pattern that changes pretty quickly between 0 and 1, but there is still a gradient between those two states, it’s just really small.

Try something like this instead for true binary states:

float idx = 0.1;
if (sin(tc.x*10.0)>0.5) idx = 0.6;

Thanks, that would explain the behaviour. Still, I do not understand why floor() would generate such a small gradient between the two states.

With your proposed modifications I still get four colors and no true binary state. Maybe the compiler is remapping the code to some internal function like floor or ceil.

The same effect can be observed with something like this:

float idx = max(step( (sin(tc.x*10.0)*1.5),0.5 ),0.0);
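On the CPU, this step() expression is just as binary as the floor() one. A quick Python sketch (GLSL's step(edge, x) returns 0.0 when x < edge, else 1.0; the 640-pixel sweep is again hypothetical):

```python
import math

def step(edge, x):
    # GLSL step(): 0.0 if x < edge, else 1.0
    return 0.0 if x < edge else 1.0

# Same hypothetical 640-pixel sweep of tc.x over 0..1.
vals = {max(step(math.sin(px / 640.0 * 10.0) * 1.5, 0.5), 0.0)
        for px in range(640)}
print(sorted(vals))  # [0.0, 1.0]: still only two states before sampling
```

So neither floor() nor step() introduces intermediate values; whatever smears the result happens after the index is computed.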

Two things:

Different texture filtering settings shouldn’t play a role.

They should. If you are trying to sample a specific color, then you need to use nearest sampling and no mipmaps, else it’s obviously going to smudge things over.

I used a simple 4x4 gradient texture that can be found here:

Is this the actual texture file you use? If so, stop using a JPG and use something that won’t actually change your colors, like a TGA or a BMP. Because of the way JPG compresses images, it’s probably creating artifacts. The same thing can happen with DDS BC1 and BC2 compression, so go for uncompressed (or losslessly compressed) files.

Perhaps that will help.

Generally you are right: the points you mentioned are important. But for the sake of this example they do not matter at all.

My point is that I feed the texture sampler with two (and only two!) different coordinates, but I get four different colors in the end, on every platform I tested this on.

I just did a mock-up run of your shader (as in, I changed the resolution to be a constant value). I checked the JPG image and it’s not actually applying any compression, which makes sense since the image is so small; it’s not even big enough to fill a single compression block. The colors from the texture are solid according to Photoshop, as expected from the size.

Running your shader with nearest filtering and no mipmaps gave me a perfectly solid image of three green bars and black, with no ramping colors, as I expected. Wrapping is set to repeat, but this shouldn’t matter.

Running the same but with the texture’s filter set to linear (and no mipmaps) gives a smudged ramp of faded green and faded black, as I also expected: two solid colors, but affected by the linear filter.

Finally, running the same but with the texture’s filter set to trilinear (linear filter + linear mipmap filter) gives me the smudged-with-red ramp you get in your image.

So, I say again:

Are you sure you are disabling filtering? My results are as expected and the only change I made to your shader was this:

    vec2 resolution = vec2(640, 480);
    vec2 tc = gl_FragCoord.xy / resolution.xy;
    float idx = max(floor(sin(tc.x*10.0)*1.5), 0.0);
    idx *= 0.5;
    idx += 0.1;

    vec3 colTex = texture2D(ramp, vec2(idx, 0.1)).xyz;
    vec3 colIdx = vec3(idx);

    fragment = vec4(colTex, 1.0);

Edit: Perhaps I should add that I checked the resulting output in Photoshop to make sure adjacent pixels were not getting smudged. The result is what I described; it wasn’t just a visual judgement. The texture ramp image was loaded as a TGA, but that shouldn’t matter unless your JPG loader does funky stuff with small images; there could be a bug there that isn’t handling that case well. Just figured this information could be useful.

Thanks for the points. But I still do not understand GL’s behaviour here. Even with bilinear (or trilinear) filtering, shouldn’t I get the same result from the sampler when I feed it the same coordinates, at least on a plane?

Only if the GL picks the same LOD each time.

You should get the same result but not the result you were expecting. I mean, the output shouldn’t vary with the same input, obviously.

However, the problem is basically that you are invoking the magnification filter every time you render this texture. The texture is 4x4 and your screen is whatever else, but bigger. I haven’t really stopped to analyze what you are actually doing, but it doesn’t matter; the point is that you are mapping some of those 4x4 texels (even if always the SAME ones!) to a big area on screen that covers more than 4x4 pixels.

Thus, the magnification filter will kick in.

If it’s bilinear, the sampler will blend the exact texel you asked for with the texels around it. That’s why, when you use bilinear, you’ll see a two-color ramp but with faded colors.
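A numeric sketch of that blending (Python; it assumes GL's convention that texel centers sit at (i + 0.5) / n, clamp-to-edge addressing, and a hypothetical single-channel 4-texel gradient row):

```python
import math

def linear_sample_1d(texels, u):
    # One axis of GL_LINEAR magnification: texel centers at (i + 0.5) / n,
    # clamp-to-edge addressing for simplicity.
    n = len(texels)
    x = u * n - 0.5
    i0 = int(math.floor(x))
    frac = x - i0
    i0c = min(max(i0, 0), n - 1)          # clamp both neighbors to the edge
    i1c = min(max(i0 + 1, 0), n - 1)
    return texels[i0c] * (1.0 - frac) + texels[i1c] * frac

row = [0.0, 1.0, 2.0, 3.0]            # hypothetical 4-texel gradient
print(linear_sample_1d(row, 0.1))     # u = 0.1 clamps to texel 0 -> 0.0
print(linear_sample_1d(row, 0.6))     # u = 0.6 blends texels 1 and 2 -> ~1.9
```

Even though the shader only ever asks for u = 0.1 and u = 0.6, neither coordinate lands on a texel center (those sit at 0.125, 0.375, 0.625, 0.875 for a 4-texel row), so the 0.6 fetch is already a blend of two texels.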

If you use trilinear (thus using mipmaps), the sample will be taken similarly to the previous case, but the lower mipmap level will also be sampled and averaged in. That’s where the red gets mixed in.

Being a 4x4 texture, it will have 2x2 and 1x1 mipmaps, and who knows what a mipmap of this texture looks like (it’s going to depend on the mipmap generation algorithm if you don’t provide the levels yourself).
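For illustration, a Python sketch of a 2x2 box-filter downsample (a common default for automatic mipmap generation; the 4x4 single-channel gradient values here are hypothetical):

```python
def box_downsample(tex):
    # Halve each dimension by averaging 2x2 blocks (a common mipmap generator).
    n = len(tex) // 2
    return [[(tex[2*y][2*x] + tex[2*y][2*x+1] +
              tex[2*y+1][2*x] + tex[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

# Hypothetical 4x4 gradient: values 0..15 row by row.
level0 = [[float(x + 4 * y) for x in range(4)] for y in range(4)]
level1 = box_downsample(level0)   # 2x2 mip
level2 = box_downsample(level1)   # 1x1 mip: the average of the whole texture
print(level1)                     # [[2.5, 4.5], [10.5, 12.5]]
print(level2)                     # [[7.5]]
```

Note that the 1x1 level is just the average of the entire texture, so whatever color dominates the image is what bleeds in at the smallest LOD.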

The fact that you are using a flat, screen-aligned plane really has nothing to do with magnification or minification filtering. Anisotropic filtering is the only filter that won’t kick in with a full-screen quad, assuming I understood the way AF works (which I honestly haven’t looked into much, since it’s literally oh so automatic).

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.