NV_point_sprites - argh!

Hi

All I want to do is use NV_point_sprites instead of calculating billboards myself and sending them as quads.
However, those stupid point sprites change their size when the camera moves towards them or away from them.
I really only want them to have one constant size (like a radius), so that their size is only affected by perspective, like everything else (just like my quads do).

Is there a way to disable this stupid behaviour? And is there an explanation of why anyone would need such stupid behaviour?

Thanks in advance,
Jan.

This should help you : http://oss.sgi.com/projects/ogl-sample/registry/EXT/point_parameters.txt

I already tried GL_DISTANCE_ATTENUATION_EXT. I didn't get it to work. My point sprites simply disappeared completely when I used it, no matter which parameters I used.
And to be honest, I don't understand that function. If you could explain what I have to do to get “simple” particles, I would be grateful.

Jan.

I haven't worked with point sprites in OpenGL, but I have in D3D for a particle system, so my experience may not apply to OpenGL. But anyway, in my experience point sprites are way too limited to be all that useful. Points are constructed in screen space rather than world space. Size is determined by attenuation factors and by a point size provided by the vertex shader, or from the vertex array if there's no shader bound. There is also an implementation-dependent upper bound on how large the point sprite can be in screen space. In D3D on a Radeon 9700 it is 256. This means that when I get very close to the particle system the particles seem to shrink as they hit the upper limit, which ruins the effect. I see no reason why there must be an upper limit at all. This was the number one reason I decided to ditch point sprites altogether and go with standard quads instead. And since I'm mostly fill-rate limited anyway, it didn't affect performance much either.

Well, that's exactly what I was thinking.
I always thought they were really useful, but that doesn't seem to be the case.
It's a pity.

Thanks,
Jan.

PS: Does anyone know if the ARB_point_sprite extension will be more useful?

Originally posted by Jan2000:
PS: Does anyone know if the ARB_point_sprite extension will be more useful?

I've also tried to use point sprites on my old GF4 (via GL_NV_point_sprite) and dropped them for the reason Humus already gave: they're too limited, and by the way, they're not supported on all graphics cards.

And as for GL_ARB_point_sprite: I only took a quick look at the extension, and it seems to be NV_point_sprite with only minor changes (for example, no R sprite mode).
(Spec is here: http://oss.sgi.com/projects/ogl-sample/registry/ARB/point_sprite.txt)

[This message has been edited by PanzerSchreck (edited 09-03-2003).]

I’m certain what you are trying to do should be supported. It seems you have a bug or point sprites are horribly broken by definition. I tend to think it’s just a bug or incorrect use.

Have you tried setting the distance attenuation to 1.0? I think that is the desired value to provide constant size, and of course make sure clamping is off.
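In EXT_point_parameters terms the coefficients feed derived_size = size * sqrt(1 / (a + b*d + c*d^2)), where d is the eye-space distance. As an untested sketch (assuming the glPointParameterfvEXT pointer has already been fetched with wglGetProcAddress):

/* a = 1, b = c = 0 keeps the screen-space size constant; a = b = 0 with a
   small quadratic c gives the roughly 1/d shrinking that perspective
   produces. The 0.01 here is an arbitrary example value to tune. */
static const GLfloat constantSize[3]    = { 1.0f, 0.0f, 0.0f };
static const GLfloat perspectiveLike[3] = { 0.0f, 0.0f, 0.01f };

glPointParameterfvEXT(GL_DISTANCE_ATTENUATION_EXT, perspectiveLike);
glPointSize(32.0f); /* arbitrary base size in pixels */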

Remember this deliberately doesn't specify a quad; it is a point, and in some implementations point sizes are clamped at some maximum value (64 pixels, for example, on some NVIDIA cards), so when you get too close and the size clamps, the quad will appear to shrink in relation to the scene unless you draw close sprites another way.

I'm pretty sure this is still a very useful extension and meets its intended purposes, including yours.

Hello,

just a suggestion: maybe it is possible to compute the size in pixels of your sprite from the view frustum.
You would then have to set the sizes of all your sprites each frame, but a simple vertex shader could do that easily…
I'm not sure this would be more efficient than quads, though…
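For example, something like this untested sketch (the names are made up; it assumes a symmetric perspective projection with vertical field of view fovyDegrees and a viewport viewportHeight pixels tall):

#include <math.h>

/* Approximate on-screen diameter in pixels of a sprite with world-space
   radius 'radius' at eye-space distance 'eyeDist'. At distance d the frustum
   covers 2*d*tan(fovy/2) world units vertically, which maps onto
   viewportHeight pixels. */
static float SpritePixelSize(float radius, float eyeDist,
                             float fovyDegrees, float viewportHeight)
{
    float halfFov = 0.5f * fovyDegrees * 3.14159265f / 180.0f;
    return (2.0f * radius) * viewportHeight / (2.0f * eyeDist * (float)tan(halfFov));
}

Note that glPointSize cannot be called inside glBegin/glEnd, so you would either group points by distance or output the size per vertex from a vertex program.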

Escondida

This is the code I use for point sprite distance attenuation.

/* Fetch the EXT_point_parameters entry points. */
glPointParameterfEXT  = (PFNGLPOINTPARAMETERFEXTPROC)  wglGetProcAddress("glPointParameterfEXT");
glPointParameterfvEXT = (PFNGLPOINTPARAMETERFVEXTPROC) wglGetProcAddress("glPointParameterfvEXT");

/* Quadratic distance attenuation: derived size = size * sqrt(1/(a + b*d + c*d^2)). */
static GLfloat quadratic[3] = { 0.0f, 0.0f, 0.0000000005f };
glPointParameterfvEXT(GL_DISTANCE_ATTENUATION_EXT, quadratic);

/* No fading for small points, and a generous size range (still clamped by the hardware limit). */
glPointParameterfEXT(GL_POINT_FADE_THRESHOLD_SIZE_EXT, 0.0f);
glPointParameterfEXT(GL_POINT_SIZE_MIN_EXT, 0.0f);
glPointParameterfEXT(GL_POINT_SIZE_MAX_EXT, 1000.0f);

On a GF5900 Ultra the maximum point sprite size looks like 512.
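Rather than eyeballing it, the limit can be queried. An untested snippet (GL_ALIASED_POINT_SIZE_RANGE is the GL 1.2 query for non-antialiased points, which is what point sprites are rendered as):

GLfloat sizeRange[2];

/* sizeRange[0] and sizeRange[1] are the smallest and largest aliased point
   sizes the implementation supports; the derived sprite size ends up clamped
   to roughly this upper limit regardless of GL_POINT_SIZE_MAX_EXT. */
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, sizeRange);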

[This message has been edited by Adrian (edited 09-03-2003).]

If this pesky upper limit didn't exist it would work quite fine. Does anyone know why it exists at all?
A max size of 256, for instance on the 9700, is not all that large when you're running at 1600x1200. The effect is lost long before you get close to the particle system. If the limit were 1024 or so it would probably work reasonably, but then again, why does there need to be a limit at all? I don't see the purpose of having such a limit and I can't see why hardware can't generate quads of any size.

One of the current consoles also limits its particles to around 40-odd pixels. Most annoying. They also don't scale at all with modelview matrix commands; each batch has to be scaled manually.

It would be nice for IHV’s to stop putting these limitations on particle objects.

We want:

  1. Proper scaling like an ordinary quad.
  2. No size limitation.
  3. Optional texture coordinates, so we can run anims on them, by coordinate shifting.

I think the main issue is that people would be happy to just use quads anyway, but they don't want to calculate the orientation of them all on the CPU.

I don’t see the purpose of having such a limit and I can’t see why hardware can’t generate quads of any size.

A limit is imposed because they don't want to devote an arbitrary number of bits to the post-T&L function that generates quads from points. Having a post-T&L operation that generates geometry at all is painful, because this is one of the parts of the pipeline that really needs to run fast.

  1. Proper scaling like an ordinary quad.
  2. No size limitation.
  3. Optional texture coordinates, so we can run anims on them, by coordinate shifting.

Then code it yourself. You have all the tools you need to write a vertex program for point sprites. Yes, you have to send them as quads, but the vertex data itself can just be one point that gets used 4 times.

Not only that, you get a lot more flexibility. You can make 1D point sprites, for instance (cylindrical imposters).

Really, be glad they’re offering any kind of point-sprite functionality at all.

I think the main issue is that people would be happy to just use quads anyway, but they don't want to calculate the orientation of them all on the CPU.

Then send them in camera space (make ModelView identity). Then you don’t have to worry about that pesky orientation.
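A rough, untested sketch of that idea (the Particle struct and the function name are made up for the example):

/* Transform each particle centre to eye space on the CPU, expand it into a
   screen-facing quad there, and draw with an identity modelview so nothing
   rotates it again. */
typedef struct { float x, y, z, radius; } Particle;

void DrawParticles(const Particle *particles, int numParticles)
{
    GLfloat m[16];
    int i;

    glGetFloatv(GL_MODELVIEW_MATRIX, m); /* column-major 4x4 */

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glBegin(GL_QUADS);
    for (i = 0; i < numParticles; ++i)
    {
        const Particle *p = &particles[i];
        /* eye-space centre = modelview * (x, y, z, 1) */
        float ex = m[0]*p->x + m[4]*p->y + m[8]*p->z  + m[12];
        float ey = m[1]*p->x + m[5]*p->y + m[9]*p->z  + m[13];
        float ez = m[2]*p->x + m[6]*p->y + m[10]*p->z + m[14];
        float r  = p->radius;

        glTexCoord2f(0.0f, 0.0f); glVertex3f(ex - r, ey - r, ez);
        glTexCoord2f(1.0f, 0.0f); glVertex3f(ex + r, ey - r, ez);
        glTexCoord2f(1.0f, 1.0f); glVertex3f(ex + r, ey + r, ez);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(ex - r, ey + r, ez);
    }
    glEnd();

    glPopMatrix();
}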

You are so damned right Nutty.

A limit is imposed because they don't want to devote an arbitrary number of bits to the post-T&L function that generates quads from points. Having a post-T&L operation that generates geometry at all is painful, because this is one of the parts of the pipeline that really needs to run fast.

Oh come on, if they can produce point sprites 256 pixels wide, why can't they make them 1024 pixels wide… Is the mathematical precision so low they can't do this?

It doesn't need to be post-T&L. In fact, when GPUs start having primitive processors for GL2, those would make ideal billboard-building parts of the GPU.

Then code it yourself. You have all the tools you need to in order to write a vertex program for point sprites. Yes, you have to send them as quads, but the vertex data itself can just be one point that gets used 4 times.

What if I don't have hardware vertex programs?

Really, be glad they’re offering any kind of point-sprite functionality at all.

Why do I have to be glad about a feature that's a pain for the majority of people who actually want point sprite functionality?

Then send them in camera space (make ModelView identity). Then you don’t have to worry about that pesky orientation.

hmmmm, haven’t thought about that, I’ll give it a try sometime…

Oh come on, if they can produce point sprites 256 pixels wide, why can't they make them 1024 pixels wide…

Because it takes 2 more bits.

It doesn’t need to be post T&L.

The spec says it does. A vertex program only operates on the single vertex sent.

There would also be no way to do them as a pre-T&L stage, because you don't know where the position data is or what coordinate system it is in. The transform matrix for a vertex program could be anywhere. And if you can't use them with vertex programs, then the functionality is weakened significantly.

What if I don't have hardware vertex programs?

Then you probably don’t have hardware point sprites either.

This is all pretty obviously limited by the attempt to generate a scaled region from a single point in hardware. It works well when used as originally intended (things like light points and particle systems with lots of small particles that would generally be clipped if they get close), but doesn’t work for legitimate uses where the particle size limits are exceeded. However for big particles the point sprite doesn’t help you much w.r.t. performance because it’s heavily gated by fill, so we’re really talking about OpenGL and hardware trying to solve an issue of application complexity.

It's time to get smart about drawing your particles and not rely on the hardware to act as a catch-all solution.

What’s being hinted at is a sprite capability, not a point sprite. Then there’d be no excuse for the hardware limits but the transform may be slower for many small particles.

I doubt this fixed functionality will happen. It is possible in a vertex program, and the whole point of programmability is to move away from more and more fixed-function capabilities. In addition, programmability is heading towards more flexibility and possibly even MIMD, where it may ultimately be possible to stream in points, generate quads, and implement the right thing yourself (highly speculative on my part). Even if you can't, you could output to a vertex buffer in hardware and then draw the buffer, at which point point sprite operations will obviously work exactly as you'd like anyway and the dedicated back end for point sprites will be eliminated.

It's a dilemma and not something anyone has a solution for: you aren't going to get this fixed on older hardware, and probably not even on recent hardware without losing much of the optimization, while future hardware won't have the problem. If you can't solve it with a vertex program on existing hardware, neither can the vendors; if you can, then there's no need for the extension. The whole reason for point sprites (and related earlier extensions) is to optimize for single-point transform and small on-screen rasterization, which used to be unreasonably expensive on 3D hardware, or in fact had no solution but application software. That is no longer the case, and you need a compelling reason for an extension beyond “I don't want to write a VP for this”.

Appealing for an increase in the sprite size is a good thing; it may not be on anyone's radar at some companies.

[This message has been edited by dorbie (edited 09-09-2003).]