I can post pics if need be, but I’m seeing unsightly artifacts in a UV editor using GL_NEAREST. Essentially it’s just drawing a texture into an arbitrary rectangle.

What I see is that pixels in a single row are (sometimes) seemingly randomly selected from the closest two rows of texels. Same deal with columns.

I haven’t seen this problem anywhere else on my system. I’d guess it’s a driver artifact. Is this a common experience? Is there an OpenGL extension or API for this?

NOTE: Simple math should consistently draw a line of pixels from one row (or column) or the other, but not both. It’s as if there is an element of randomness in the sampling.

Could be that the texcoords being chosen are right on the edge between two texel columns (or rows), causing the texel that is selected to be nearly random based on floating- (or fixed-) point inaccuracies in the computation + rasterization.

Right. But the math isn’t necessarily simple. Bear in mind that OpenGL doesn’t draw rectangles, it draws triangles. What might be happening (and I hasten to add that this is little more than a guess) is that the calculation of barycentric coordinates from window-space coordinates results in values which (due to the nature of floating-point arithmetic) aren’t constant for a row or column even though (theoretically) they should be. In the general case (where vertices may have differing W coordinates), the barycentric coordinates are obtained from window coordinates by equations of the form:

A = a0 + ax*x + ay*y
B = b0 + bx*x + by*y
C = c0 + cx*x + cy*y
w = (A + B + C)
a = A/w
b = B/w
c = C/w

In the case of a triangle where all three vertices have identical W coordinates, the calculated w above should be a constant (i.e. ax+bx+cx=0, ay+by+cy=0). But floating-point being what it is, there may be minuscule variations from one pixel to the next.
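This is easy to reproduce in software. The sketch below (not what any particular GPU literally does) evaluates plane equations of that form in float32 along one row of pixels; the coefficients are made up, chosen so that ax+bx+cx is exactly zero in real arithmetic and w "should" be a constant 1.0:

```python
import numpy as np

f = np.float32  # emulate single-precision hardware interpolators

# Hypothetical plane-equation coefficients; in real arithmetic
# ax + bx + cx == 0 and a0 + b0 + c0 == 1, so w should be 1 everywhere.
a0, ax = f(0.1), f(0.1)
b0, bx = f(0.2), f(0.3)
c0, cx = f(0.7), f(-0.4)

ws = []
for xi in range(8):  # walk 8 pixels along one row (y held fixed)
    x = f(xi)
    A = a0 + ax * x
    B = b0 + bx * x
    C = c0 + cx * x
    ws.append(float(A + B + C))  # w: "should" be exactly 1.0

print(sorted(set(ws)))  # more than one distinct value near 1.0
```

On IEEE-754 hardware this prints more than one distinct value: some pixels land on exactly 1.0, others on 1.0 plus a couple of ulps, purely from the order of roundings in the multiplies and adds.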

And if you’re sampling a texture exactly on texel boundaries with GL_NEAREST, an error in the least significant bit is enough to change which texel is sampled.
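To make that concrete: treating nearest sampling as roughly floor(s * width) (a simplification of what the spec actually prescribes), a one-ulp nudge in s flips the result across a texel boundary:

```python
import math

width = 256
s = 0.5                # exactly on the boundary between texels 127 and 128
nudge = math.ulp(0.5)  # a least-significant-bit error in s

print(math.floor(s * width))            # 128
print(math.floor((s - nudge) * width))  # 127
```

An error in the last bit of s moves the sample a whole texel, which is exactly the row/column flicker being described.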

I think that much is clear. What may be less clear (without some understanding of the underlying process) is why you don’t simply get s varying with x alone and t varying with y alone. IOW, the issue isn’t that the exact choice of texel is undefined, but that it’s inconsistent.

Ultimately, I suspect it’s because the system is designed for projective mappings and terms which cancel algebraically don’t actually cancel when it comes to floating-point arithmetic.

I had to fuss around with it a lot to get these nibbles to appear on both a row edge and a column edge. They happen to fall along the center line, but I have another capture with them at a 16px offset, along the top of the white and gold squares. I don’t know if they always fall on 16px boundaries, but they may.

My intuition is that it could be a defect in a chunk-based sampler. I use nearest-neighbor sampling in my work with Direct3D and have not observed this kind of effect on this system. Because the corners in the white outline don’t form hard 3px right angles, I think the projection matrix is probably offset somewhat in this 2D case.

I just find it odd, and I was curious to ask about others’ experiences: whether this is common, and if so, what strategies exist. It’s an interesting phenomenon to me.

@GClements my sense is that if it were down to the triangle-based UV mapping, the pattern would be different… it would more likely run for a while on one row, then switch over to the row above or below for the rest of the primitive, rather than bounce back and forth seemingly at random.

Lesson: I scooted the projection over by half a pixel, which made the outline square as I’d anticipated. The problem remained, but making the adjustment slightly less than half a pixel seems to have eliminated the artifact on this particular hardware (Intel Iris Pro 5200).
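That offset may be doing what the usual strategy does deliberately: arranging the mapping so samples land on texel centers, (i + 0.5)/size, rather than on texel edges. At a center, a one-ulp wobble in s can’t change which texel floor() picks. A sketch (illustrative numbers, not tied to any particular driver):

```python
import math

size = 256
i = 37                # an arbitrary texel index
s = (i + 0.5) / size  # sample at the texel center, not the edge

wobble = math.ulp(s)  # a least-significant-bit error either way
assert math.floor(s * size) == i
assert math.floor((s + wobble) * size) == i
assert math.floor((s - wobble) * size) == i
print("center sampling is robust to 1-ulp error")
```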

I would expect a nearest-neighbor sampler given rectilinear coordinates to round off arbitrarily, but at least consistently: no guarantee on which row/column you will get, but never both! I want to refrain from speculating about what could be happening. That’s what I’m here for, after all.

EDITED: I felt I spoke too soon, and so deleted this post (I don’t know if casual deletion is something I can do only because of my moderator status) because the problem soon reared its head again… however, what happened is that I had toned down my first half-pixel offset by two orders of magnitude, and that negated its usefulness. It’s odd that changing the projection matrix seems to work, when the problem is clearly rooted in the GL_NEAREST sampling. It doesn’t occur elsewhere, where UI element textures are displayed 1:1.

The (presumed) issue isn’t the split between triangles, it’s that what should be e.g. s=a+bx (i.e. entirely independent of y) actually ends up as something like s=(a+bx)/(x+y+(1-x-y)). The denominator should be a constant 1 but is actually 1 +/- some rounding error which depends upon both x and y.
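To connect that back to the artifact: divide a boundary-exact s by a denominator that is off by a couple of float32 ulps, and floor() lands in a different texel. The numbers below are illustrative (1.0000002384185791 is 1.0 plus two float32 ulps), not measured from any hardware:

```python
import math
import numpy as np

f = np.float32

def texel(s_num, w, size=256):
    # Nearest-style lookup after a perspective divide: floor(s * size).
    s = f(s_num) / f(w)
    return math.floor(float(s * f(size)))

print(texel(0.5, 1.0))                  # 128
print(texel(0.5, 1.0000002384185791))   # 127
```

A denominator error in the last couple of bits is enough to move a boundary sample by a whole texel, even though the numerator is identical in both cases.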

Assuming this is what’s happening, it’s a consequence of the fact that the math for rendering triangles with perspective-correct texturing is a bit more involved than the math for scaled sprites (or even rotated sprites, for that matter). If the hardware only supported linear texture mapping, there wouldn’t be a denominator and the issue wouldn’t arise.

I can’t be certain that this is the reason, but nor can I think of anything else which would cause it. For an axis-aligned triangle, the relevant coefficients in the numerators should be exactly zero, hence my assumption that it’s the denominator.

Possibly. My assertion hinges on the presumption that hardware people would go to great lengths to guarantee the correctness of square projections.

Fixing the projection mapping to pixel space seems to make the problem go away. My guess is the hardware is marrying the framebuffer and texture somehow, so that the math depends on the projection mapping to the center of pixels, even though it seems theoretically isolated to texture sampling. I.e. somewhere (at some stage) in the hardware it might switch over to finite units.