Does anyone know how to determine the number of scalars output from a geometry shader? In their programming guide, NVIDIA recommends keeping this number no greater than 20 for peak performance on the GeForce 8800 GTX. Presumably more recent hardware has a larger sweet spot.

I’m not sure if the number of scalars output is as simple as the number of vertices times the size of each vertex. How do you determine the size of each vertex? Does an output with the flat qualifier count? Does every member in gl_PerVertex count? Or just the ones I write to?

out gl_PerVertex {
    vec4  gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
};

The reason I ask is that even for something simple like point sprites in a geometry shader, the output is at least gl_Position (4 scalars) and a texture coordinate (2 scalars) per vertex. That is 6 scalars per vertex, and a sprite quad has 4 vertices, for a total of 24 scalars - above NVIDIA's limit of 20. Unless you can compute the texture coordinate in the fragment shader instead, which would result in outputting less.
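To make the counting concrete, here is a minimal sketch of the kind of point-sprite geometry shader I mean (the uniform names and the half-size parameter are my own, not anything standard), with the per-vertex outputs annotated:

```glsl
#version 150

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4  projection;  // assumed uniform names for illustration
uniform float halfSize;

out vec2 texCoord;         // 2 scalars per vertex

void main()
{
    vec4 center = gl_in[0].gl_Position;

    // Each EmitVertex() writes gl_Position (4 scalars) + texCoord (2 scalars)
    // = 6 scalars per vertex, times 4 vertices = 24 scalars total.
    for (int i = 0; i < 4; ++i) {
        // Strip order: (-1,-1), (1,-1), (-1,1), (1,1)
        vec2 corner = vec2(i & 1, i >> 1) * 2.0 - 1.0;
        gl_Position = projection * (center + vec4(corner * halfSize, 0.0, 0.0));
        texCoord    = corner * 0.5 + 0.5;   // maps to (0,0)..(1,1)
        EmitVertex();
    }
    EndPrimitive();
}
```

As I understand it, this counts gl_Position and texCoord because they are actually written; whether unwritten members of gl_PerVertex count is exactly what I'm unsure about.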

the output is at least gl_Position (4 scalars) and a texture coordinate (2 scalars) per vertex, for a total of 24 scalars - above NVIDIA’s limit of 20.

That’s more of a GLSL problem. GLSL defines gl_Position as a vec4, while HLSL lets you declare the position output as a float3 (presumably the hardware fills in the 1.0 for you).

Also, 20 is their limit for “peak performance,” so it’s a question of what you’re giving up in one place to gain in another.
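On the idea of computing the texture coordinate in the fragment shader: for screen-aligned sprites you can sometimes skip the geometry shader entirely, render plain GL_POINTS with gl_PointSize set (and GL_PROGRAM_POINT_SIZE enabled), and let the rasterizer supply the coordinate via gl_PointCoord. A sketch of the fragment shader (the sampler name is my own):

```glsl
#version 150

uniform sampler2D sprite;  // assumed sampler name
out vec4 fragColor;

void main()
{
    // gl_PointCoord runs from (0,0) to (1,1) across the rasterized point,
    // so no per-vertex texture coordinate has to be output at all.
    fragColor = texture(sprite, gl_PointCoord);
}
```

That drops the geometry-shader output count for this case to zero, at the cost of points being clipped by their center and limited by GL_POINT_SIZE_RANGE.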