Geometry shader returning incorrect gl_Position

So, I’ve been writing a geometry shader for expanding a point out to a quad for drawing billboards. It wasn’t working correctly, so I simplified it to ignore the vertex shader output and just generate a quad in the middle of the screen:

layout ( points ) in;
layout ( triangle_strip, max_vertices = 4 ) out;


out vec2 gsTexCoord;
out vec4 gsColour;


void main()
{
    gl_Position = vec4(-0.5, -0.5, 0, 1);
    gsTexCoord = vec2(0, 0);
    gsColour = vec4(0.1);
    EmitVertex();


    gl_Position = vec4(0.5, -0.5, 0, 1);
    gsTexCoord = vec2(0, 0.5);
    gsColour = vec4(0.2);
    EmitVertex();


    gl_Position = vec4(-0.5, 0.5, 0, 1);
    gsTexCoord = vec2(0.5, 0);
    gsColour = vec4(0.3);
    EmitVertex();


    gl_Position = vec4(0.5, 0.5, 0, 1);
    gsTexCoord = vec2(1, 1);
    gsColour = vec4(0.4);
    EmitVertex();


    EndPrimitive();
}

With a fragment shader that just returns a constant vec4(1), I get the expected output:

[attachment 173: screenshot of the quad rendered as expected]

However, if I change the fragment shader to return gsColour, the positions of the quad vertices change:

in vec2 gsTexCoord;
in vec4 gsColour;


out vec4 Colour;


void main()
{
    Colour = gsColour;
}

[attachment 174: screenshot showing the quad with displaced vertices]

In fact, if you look closely at the geometry shader code, you may notice that the texture coordinates are being output as gl_Position!

It would seem that reading gsColour in the fragment shader is somehow causing the x and y values of gl_Position to be overwritten by the values stored in gsTexCoord.
Could someone please explain what incredibly stupid thing I’m doing wrong here?

Three questions:

a) Is your input primitive type correct, i.e. are you invoking gl[Multi]Draw*() with GL_POINTS?
b) Does anything change if you put gl_Position as the last assignment right before EmitVertex() ?
c) Does anything change if you put EndPrimitive() after the third vertex, finalizing the first triangle of the strip? I think it shouldn’t matter, but I’ve seen code do this.
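For (b), a minimal sketch of the reordering, using the values from your first vertex: the per-vertex varyings are written first, and gl_Position becomes the last write before each emit.

```glsl
// Varyings first, gl_Position last, immediately before EmitVertex().
gsTexCoord  = vec2(0, 0);
gsColour    = vec4(0.1);
gl_Position = vec4(-0.5, -0.5, 0, 1);
EmitVertex();
```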

In general your shader seems correct. The triangle winding is correctly specified CCW, each output variable is written before each EmitVertex(), and the layouts seem correct. What isn’t correct is the viewport mapping: since you are drawing a square, it should also appear as a square on screen. You should correct the aspect ratio used in your projection matrix.

I can’t see much difference to my code except I happen to do the colour then the texcoord.

Yep! That fixes it. This definitely doesn’t seem right though; perhaps a bug in AMD’s drivers?

Well I was outputting directly into normalized device coordinates, and my window is 16:9, so the output will look rectangular.

Anyway, many thanks.

Does anything change if you put gl_Position as the last assignment right before EmitVertex() ?
Yep! That fixes it. This definitely doesn’t seem right though; perhaps a bug in AMD’s drivers?

That is interesting. What driver version are you using? I haven’t had that problem on my AMD system.

Yep! That fixes it. This definitely doesn’t seem right though; perhaps a bug in AMD’s drivers?

I guess so. There is nothing in the spec that says writes to gl_Position must not appear before writes to other outputs, nor would such a restriction make any sense. Aside from that, it would be such an easy-to-miss fault that implementors cannot seriously encourage it.

Well I was outputting directly into normalized device coordinates, and my window is 16:9, so the output will look rectangular.

You’re absolutely right. However, is that really what you intended? If so, using the geometry shader wouldn’t make a lot of sense. :)

Heh, well originally I was expanding the points into billboards facing the camera in view space, and then projecting them via the projection matrix. That code just ended up as in the OP as I simplified it as much as possible while trying to narrow down where the error was coming from.

Catalyst 12.4 with a 6870 on Win7 x64.

So what’s your original shader (with the above corrections)?

Here’s the complete shader. A bit of a warning, though: it hasn’t been properly tested. I’ve probably got a cross product the wrong way round at least…
vsRectangle stores the minimum and maximum texture coordinates, and vsSize is the size of the billboard in world space.


layout ( points ) in;
layout ( triangle_strip, max_vertices = 4 ) out;

uniform mat4 View;
uniform mat4 Projection;

in vec4[] vsColour;
in vec4[] vsRectangle;
in vec2[] vsSize;

out vec2 gsTexCoord;
out vec4 gsColour;

void main()
{
    vec3 pos = gl_in[0].gl_Position.xyz;

    vec3 viewPos = (View * vec4(pos, 1)).xyz;
    vec3 x = normalize( cross( viewPos, vec3(0,1,0) ) );
    vec3 y = normalize( cross( x, viewPos ) );

    vec3 dx = x * vsSize[0].x;
    vec3 dy = y * vsSize[0].y;

    gsTexCoord = vsRectangle[0].xy;
    gsColour = vsColour[0];
    gl_Position = Projection * vec4( viewPos - dx - dy, 1 );
    EmitVertex();

    gsTexCoord = vsRectangle[0].zy;
    gsColour = vsColour[0];
    gl_Position = Projection * vec4( viewPos + dx - dy, 1 );
    EmitVertex();

    gsTexCoord = vsRectangle[0].xw;
    gsColour = vsColour[0];
    gl_Position = Projection * vec4( viewPos - dx + dy, 1 );
    EmitVertex();

    gsTexCoord = vsRectangle[0].zw;
    gsColour = vsColour[0];
    gl_Position = Projection * vec4( viewPos + dx + dy, 1 );
    EmitVertex();

    EndPrimitive();
}

Hmm, I think you want a billboard constrained to rotate around y, correct? How do you account for the rotation? In the vertex shader and simply expand the resulting quad in the geometry shader? Currently I don’t see why you would need a geometry shader to do that.

I don’t want the rotation constrained in any axis in this version of the shader.
I’m using the geometry shader so that I only need to send one vertex for each billboard. It simplifies the rest of the system. It’s also nice later when you want to create a particle system, as you could simulate the particle updates in the vertex shader (perhaps with transform feedback), so the vertex shader is only doing 1/4 of the work it would be doing if you were sending over a whole quad for each particle.
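The update pass I have in mind could be sketched roughly like this (a minimal sketch; all names and the gravity constant are made up for illustration):

```glsl
// Hypothetical update pass: integrate particle state in the vertex
// shader and capture the outputs with transform feedback.
uniform float u_DeltaTime;

in vec3 inPosition;
in vec3 inVelocity;

out vec3 outPosition;   // captured via transform feedback
out vec3 outVelocity;

void main()
{
    // Simple Euler integration under gravity.
    outVelocity = inVelocity + vec3(0, -9.81, 0) * u_DeltaTime;
    outPosition = inPosition + outVelocity * u_DeltaTime;
}
```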

At second glance I can see that the coordinate basis is aligned with the eye-space vector of your point, i.e. the eye-space vector is the -z axis of your coordinate system. As I picture it in my mind, you actually do a rotation with that, and it seems pretty clever to me.

Although I assume you’re aware of it, mind that in the following code


vec3 x = normalize( cross( viewPos, vec3(0,1,0) ) );     
vec3 y = normalize( cross( x, viewPos ) );

x may become the zero vector if viewPos and vec3(0, 1, 0) are parallel, and thus y subsequently becomes the zero vector as well. If you’re only in a first-person sort of view that’s fine. In a third-person view, however, this might cause problems.
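One possible guard, as a sketch (the 0.99 threshold and the x-axis fallback are arbitrary choices of mine, not anything from the original code):

```glsl
vec3 view = normalize(viewPos);
// Fall back to a different reference axis when the view vector
// is (nearly) parallel to the world up vector, so the cross
// products never degenerate to the zero vector.
vec3 up = abs(view.y) > 0.99 ? vec3(1, 0, 0) : vec3(0, 1, 0);
vec3 x  = normalize(cross(view, up));
vec3 y  = normalize(cross(x, view));
```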

so the vertex shader is only doing 1/4 of the work

One might think that will always be an improvement. However, especially with a lot of particles, and thus a lot of primitives generated, you should profile whether using the GS in this way doesn’t degrade performance substantially. Do you intend to use a single particle origin and emit particles from there using the GS?

The camera should always be at the origin in view space, unless you are doing something funky with your view and projection matrices (and “camera position” doesn’t make much sense with an orthographic projection).

I’ll create one vertex for each particle, so the geometry shader will only ever expand one point into a quad. It won’t be producing a variable sized output. I will need to profile it once I’ve got a proper system set up with some real world examples. I would imagine a particle system would likely be fill rate limited long before any overhead from the geometry shader becomes important, though, so I’ll probably keep it for its simplicity.

I would imagine a particle system would likely be fill rate limited long before any overhead from the geometry shader becomes important[…]

If particles cover only a small part of the screen, i.e. a negligible number of fragments is rasterized and your fragment shader isn’t ridiculously complex to begin with, the application won’t become fillrate limited. If the number of GS writes is sufficiently high, though, and isn’t reduced with respect to the distance of the viewer to the particle system, then it will easily limit your app. Also, in my experience the geometry shader performance varies dramatically between hardware generations.

Someone should do some profiling on GS performance on newer cards.

It seems like for some reason gl_Position and gsTexCoord are aliased and they both use the same “physical” interpolant in this case.
Have you tried using gsTexCoord in your fragment shader? I’m pretty sure that even though gl_Position is now working, because you write it last, gsTexCoord will now contain the values written to gl_Position instead. Can you check this?
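A quick way to check would be to visualize the texture coordinates directly, e.g. with a fragment shader like this sketch (using the OP’s varying names):

```glsl
in vec2 gsTexCoord;
out vec4 Colour;

void main()
{
    // Map the texcoords into the red/green channels, so any
    // aliasing with gl_Position shows up as a colour shift.
    Colour = vec4(gsTexCoord, 0, 1);
}
```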

Btw, are you using separate shader objects? With those, gl_Position has to be explicitly declared. Even if not, I would try explicitly redeclaring gl_Position as specified in the GLSL spec.
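The explicit redeclaration would look like this in the geometry shader (the standard gl_PerVertex output block from the GLSL spec):

```glsl
out gl_PerVertex
{
    vec4  gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
};
```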

[quote]Catalyst 12.4 with a 6870 on Win7 x64.[/quote]
Your driver is newer than mine. I will update and see if my code still works.

The texture coordinates appear to be working fine in the fragment shader, and redeclaring the gl_PerVertex block did not allow me to assign gl_Position earlier (I wasn’t using separate shader objects).

In that case you should file a bug with AMD.

I just updated my driver on a 5870 and it works fine. This draws a cross facing the camera at each point:

vertex shader


uniform mat4 u_CameraModelView;
in vec3 Position;
in vec4 Colour;
out vData_Struct { vec4 vColour; } vData;

void main()
{
  gl_Position = u_CameraModelView * vec4(Position,1);
  vData.vColour = Colour;
}

gs shader


layout( points ) in;
layout( line_strip, max_vertices = 4 ) out;

in vData_Struct
{
  vec4 vColour;
} vData[];


out vec2 gTexCoord;
out vec4 gColour;

uniform mat4 u_CameraProjection;
uniform float u_Radius;

void main()
{
  gl_Position = u_CameraProjection * (vec4(-u_Radius,0.0,0.0,0.0) + gl_in[0].gl_Position);
  gColour = vData[0].vColour;
  gTexCoord = vec2(-1.0,-1.0);
  EmitVertex();

  gl_Position = u_CameraProjection * (vec4(u_Radius,0.0,0.0,0.0) + gl_in[0].gl_Position);
  gColour = vData[0].vColour;
  gTexCoord = vec2(1.0,-1.0);
  EmitVertex();

  EndPrimitive();

  gl_Position = u_CameraProjection * (vec4(0.0,-u_Radius,0.0,0.0) + gl_in[0].gl_Position);
  gColour = vData[0].vColour;
  gTexCoord = vec2(-1.0,1.0);
  EmitVertex();

  gl_Position = u_CameraProjection * (vec4(0.0,u_Radius,0.0,0.0) + gl_in[0].gl_Position);
  gColour = vData[0].vColour;
  gTexCoord = vec2(1.0,1.0);
  EmitVertex();

  EndPrimitive();
}