Fullscreen quad invoking fragment shader twice along triangle seam?

Yeah, that’s why I concluded that the triangles overlap, because the fragment shader shouldn’t have any effect outside of the triangle.

This is my [abridged] frag code:

#version 420
#include <shaderCommon.h>        // defs for 'red', 'green', etc.
#include <gtest/AtomicCounter.h> // defs for binding locations

layout(binding = AC_ATOMIC_BINDING, offset = 0) uniform atomic_uint buffer_index_counter;

layout(binding = AC_IMAGE_EXCHANGE_UNIT, r32ui) coherent uniform uimage2D ex_image;

uniform uint clear_value;
uniform uint viewport_height;
uniform uint viewport_width;

out vec4 out_color;

void main()
{
    ivec2 pixel_coords = ivec2(int(gl_FragCoord.x),
                               int(gl_FragCoord.y));

    // Count total fragment shader invocations.
    uint count = atomicCounterIncrement(buffer_index_counter);

    // Swap this fragment's x coordinate into the image and read
    // back whatever was stored there before.
    uint val = imageAtomicExchange(ex_image,
                                   pixel_coords,
                                   uint(gl_FragCoord.x));

    // First invocation on a pixel reads back clear_value (green);
    // a second invocation reads back its own x coordinate (white).
    if ((val == uint(gl_FragCoord.x)) && (val != clear_value)) {
        out_color = white;
    }
    else {
        out_color = green;
    }
}

Note that prior to the shader invocation, ex_image contains clear_value in every pixel. So if this shader runs once on a pixel, it should output green, but if it runs twice, it should output white.
I would expect the output to be pure green, since the fragment shader should run only once per pixel. Indeed, on Windows, I do get pure green.
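For completeness, the clear-and-bind step the test relies on might look something like this on the host side (a sketch, not the app’s actual code; ex_image_tex, width, and height are illustrative names, and a GL loader, <vector>, and a current context are assumed):

// Fill the R32UI texture behind ex_image with clear_value...
std::vector<GLuint> fill(width * height, clear_value);
glBindTexture(GL_TEXTURE_2D, ex_image_tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RED_INTEGER, GL_UNSIGNED_INT, fill.data());

// ...and attach it to the image unit the shader declares.
glBindImageTexture(AC_IMAGE_EXCHANGE_UNIT, ex_image_tex, 0,
                   GL_FALSE, 0, GL_READ_WRITE, GL_R32UI);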

Strangely, the line appears to be anti-aliased (try zooming way in using an image-editing program).

This is definitely a line (heh) of thought worth pursuing. I’m not familiar with MSAA though. How would the expected number of samples differ between MSAA and a buffer where the triangles simply overlap?

And why might GL be running in MSAA mode when I create a context with a non-MS buffer and never explicitly enable it?

[QUOTE=AlexN;1241304]Is the overlapping region a single pixel wide, or is it sometimes a couple pixels wide and blocky? If a single pixel, that sounds like a precision issue. If more than one pixel and blocky, it’s likely that you’re seeing an artifact from the GPU invoking your fragment shader in 2x2 (or larger) groups of pixels for efficiency. Normally this is not visible because the extra fragments are discarded, but if you make accesses to main memory then there can be side effects.

Either way, the easiest solution may be to change how you render your full screen quad. Try using a single triangle that is larger than your viewport and fully contains it. This is typically more efficient anyway as it avoids the wasted fragments along the seam between two triangles due to the pixel grouping mentioned above.

Edit: I take back the bit about accessing main memory from a helper fragment having side effects. Looks like that is not supposed to happen, though there could be a driver bug you are hitting if so. From http://www.opengl.org/registry/specs/ARB/shader_image_load_store.txt

(22) If implementations run fragment shaders for fragments that aren’t covered by the primitive or fail early depth tests (e.g., “helper pixels”), how does that interact with stores and atomics? RESOLVED: Stores will have no effect. Atomics will also not update memory. The values returned by atomics are undefined.[/QUOTE]

This is certainly something I can check for. I could render a triangle with the texture as a render target, then render again with the texture bound as an image that I write to.
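For reference, AlexN’s single-triangle approach can be driven entirely from gl_VertexID, with no vertex buffer at all. A minimal sketch (the shader below is illustrative, not code from this app):

// Fullscreen triangle: three clip-space vertices spanning
// (-1,-1) to (3,3) fully cover any viewport, so there is no
// interior seam for fragments to double up on. Clipping trims
// the excess, so nothing outside the viewport is shaded.
const char* fullscreen_vs = R"(
    #version 420
    void main()
    {
        vec2 pos = vec2(float((gl_VertexID & 1) << 2) - 1.0,
                        float((gl_VertexID & 2) << 1) - 1.0);
        gl_Position = vec4(pos, 0.0, 1.0);
    }
)";

// Drawn with no attributes bound:
//   glDrawArrays(GL_TRIANGLES, 0, 3);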

[QUOTE=Dan Bartlett;1241307]Does having them wound the same direction have any effect?

e.g. Instead of:
{0, 1, 2, 1, 2, 3}
using
{0, 1, 2, 1, 3, 2}.

AFAICT if either way is used only one polygon should produce a fragment along the common edge, so you’ll probably need to wait for a driver fix. Triangles wound the same direction may be a more tested path though.
[/QUOTE]

No effect.
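To make Dan Bartlett’s two orderings concrete (an illustrative sketch that assumes the common vertex layout 0=(-1,-1), 1=(1,-1), 2=(-1,1), 3=(1,1)):

// Mixed winding: (0,1,2) comes out CCW but (1,2,3) comes out CW.
const GLuint mixed_winding[] = { 0, 1, 2,   1, 2, 3 };

// Consistent winding: both (0,1,2) and (1,3,2) come out CCW.
const GLuint same_winding[]  = { 0, 1, 2,   1, 3, 2 };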

I’m planning on preparing a self-contained test program to submit with a bug report. I’ll give this a try tomorrow while I’m doing that.

Thanks for all the help and suggestions, guys.

And why might GL be running in MSAA mode when I create a context with a non-MS buffer and never explicitly enable it?

That’s an easy question to answer: your driver settings panel has switches that can force applications to use anti-aliasing for the main framebuffer.

The way to prevent this is to create your render targets yourself. But you should be able to detect it by calling glGetIntegerv(GL_SAMPLE_BUFFERS, ...) while the default framebuffer is bound to GL_DRAW_FRAMEBUFFER. It should return 0 if the framebuffer is not multisampled.
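Something like this, for instance (a sketch; a GL loader and a current context are assumed):

GLint sample_buffers = 0;

// Query the default framebuffer; a control-panel-forced MSAA
// backbuffer shows up here even though the app never asked for it.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glGetIntegerv(GL_SAMPLE_BUFFERS, &sample_buffers);

// sample_buffers == 0 means the default framebuffer is single-sampled.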

glGetIntegerv(GL_SAMPLE_BUFFERS, ...) returns 1.
glIsEnabled(GL_MULTISAMPLE) returns false.

So what happens if you create your own renderbuffers and render to those?
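Something along these lines, perhaps (a sketch; fbo, rbo, width, and height are illustrative names):

GLuint fbo = 0, rbo = 0;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);

// Non-multisampled storage; control-panel AA overrides generally
// apply only to the default framebuffer, not app-created FBOs.
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);

// Should be GL_FRAMEBUFFER_COMPLETE before repeating the seam test.
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);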