The OpenGL spec has accumulated plenty of little cruft, and this is one piece of it. The point sprite specification really covers two concepts: old-style point sprites from pre-GLSL days, and GLSL-style point sprites. These days the separation between them is pretty distinct.
Except for clipping.
For some reason, the spec still says that point sprites must be clipped like old-style hardware point sprites. That is, by the point's center, rather than by the area generated from the size computation.
This is stupid. NVIDIA doesn’t even bother to clip correctly. So the ARB may as well formalize existing practice and just make it work correctly.
I agree that there should be some cleanup regarding point sprites. I might even go so far as to deprecate fixed-function point sprites and force the use of geometry shaders to produce them.
AFAIK, NVIDIA and ATI implement fixed-function point sprites differently on current hardware:
ATI internally uses a geometry shader to produce the point sprite primitive, as the hardware has a fast path for 1:4 geometry amplification (which is exactly what's needed to generate a triangle strip representing a quad from a point primitive). This way they are able to clip the point primitive in the geometry shader before emitting any primitives to the subsequent stages of the pipeline.
NVIDIA has some fixed-function hardware for generating point sprites, but that hardware clips the output quad rather than the input point. In order to emulate point-based clipping, as the spec requires, it adds some sort of padding.
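To make the ATI-style path concrete, here is a hypothetical C sketch of the 1:4 amplification such a driver-generated geometry shader would perform: one point is expanded into the four corners of a screen-aligned quad, in triangle-strip order. The function name and the convention that half_w/half_h are the half point size already converted to NDC units are my assumptions for illustration.

```c
#include <assert.h>

typedef struct { float x, y, z, w; } vec4;

/* Expand a point (in clip space) into the 4 corners of a screen-aligned
 * quad, emitted in triangle-strip order.  Offsets are scaled by w so
 * the sprite keeps its on-screen size after the perspective divide.
 * half_w/half_h: half the point size in NDC units (assumption:
 * point_size / viewport_size, precomputed by the caller). */
static void emit_point_sprite(vec4 center, float half_w, float half_h,
                              vec4 out_corners[4])
{
    /* Strip order: bottom-left, bottom-right, top-left, top-right. */
    const float sx[4] = { -1.0f,  1.0f, -1.0f, 1.0f };
    const float sy[4] = { -1.0f, -1.0f,  1.0f, 1.0f };
    for (int i = 0; i < 4; ++i) {
        out_corners[i] = center;
        out_corners[i].x += sx[i] * half_w * center.w;
        out_corners[i].y += sy[i] * half_h * center.w;
    }
}
```

The point of doing this in a geometry shader is that the shader can test the input point against the clip volume *before* calling the equivalent of EmitVertex(), which is how the spec-mandated all-or-nothing clipping falls out naturally on that path.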
I have to admit that neither of these is confirmed by ATI or NVIDIA, but from what I've read on the issue (GPU programming guides and some extension specifications that mention point sprites; I don't remember exactly which), it seems this may really be how the vendor implementations work.
I know that most people are against deprecating fixed function, since it's more convenient than writing a shader that does the job for you, but it seems that the only simple and complete solution would be to deprecate fixed-function point sprites and promote geometry-shader-based point sprites.
For reference, with respect to point sprite clipping, the spec says that clipping a point sprite is an all-or-nothing kind of deal.
I.e., the screen-aligned square is not generated at all if any of the gl_ClipDistance values is less than 0, or if the point (i.e. the center of the quad) is outside the clip volume.
Additionally, having used point sprites myself, I can confirm this behavior on at least NVIDIA hardware: as soon as the sprite's center fell outside the clip volume or a user-defined clip volume, the sprite was not drawn at all. For user-defined clip volumes this makes perfect sense, since there are no varyings to interpolate.
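The all-or-nothing rule above can be written down as a single predicate on the point's center. This is a sketch, not driver code; the function name and the clip_dist array (modelling the enabled gl_ClipDistance values) are made up for illustration:

```c
#include <stdbool.h>

typedef struct { float x, y, z, w; } vec4;

/* All-or-nothing point sprite clipping: the quad is generated only
 * if the *center* passes every clip test; the expanded quad itself
 * is never tested.  clip_dist[0..n_clip) models gl_ClipDistance for
 * the enabled user clip planes (assumption, for illustration). */
static bool point_sprite_survives_clipping(vec4 c,
                                           const float *clip_dist,
                                           int n_clip)
{
    /* Frustum test on the center only. */
    if (c.x < -c.w || c.x > c.w) return false;
    if (c.y < -c.w || c.y > c.w) return false;
    if (c.z < -c.w || c.z > c.w) return false;
    /* Any negative clip distance culls the whole sprite. */
    for (int i = 0; i < n_clip; ++i)
        if (clip_dist[i] < 0.0f) return false;
    return true;
}
```

Note how a sprite whose center is one pixel outside the viewport vanishes entirely even though most of its quad would be visible, which is exactly the popping artifact people complain about.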
As for deprecating point sprites, I don't think that is a good idea. Indeed, the main complaint most people have with point sprites is that the whole thing gets clipped when the center is outside the view volume. I'd propose something simpler:
Just as GL_DEPTH_CLAMP disables the -w<=z<=w test for points, lines and polygons, add a set of new enable/disable flags to control whether the -w<=x<=w and -w<=y<=w tests are used to decide if a point is clipped, together with rasterization rules that define where it is drawn. Naively: let xn and yn be the normalized device coordinates, and rasterize the intersection of the point sprite's rectangle with the viewport (which will be a rectangle anyway). Once the -w<=x<=w and -w<=y<=w clipping is taken out, the implicit w>0 clip still needs to be enforced.
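The proposed rasterization rule boils down to ordinary rectangle intersection in window space. A minimal sketch, assuming the w>0 check has already been done before the rectangle was computed (the rect type and function name are made up for illustration):

```c
#include <stdbool.h>

typedef struct { float x0, y0, x1, y1; } rect;

/* With the x/y frustum tests disabled, clamp the sprite's
 * window-space rectangle against the viewport instead of culling
 * the whole sprite.  Returns false when the intersection is empty
 * (nothing to rasterize). */
static bool clip_sprite_rect(rect sprite, rect viewport, rect *out)
{
    out->x0 = sprite.x0 > viewport.x0 ? sprite.x0 : viewport.x0;
    out->y0 = sprite.y0 > viewport.y0 ? sprite.y0 : viewport.y0;
    out->x1 = sprite.x1 < viewport.x1 ? sprite.x1 : viewport.x1;
    out->y1 = sprite.y1 < viewport.y1 ? sprite.y1 : viewport.y1;
    return out->x0 < out->x1 && out->y0 < out->y1;
}
```

A sprite whose center sits just off-screen then still contributes its visible sliver, rather than popping out of existence the way the current all-or-nothing rule makes it.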