ARB_point_sprite - where do I start?

I am working on a particle system, and I want to use the ARB_point_sprite extension.

I have never done this before, not even using quads, so I am looking for some kind of example to help me along.

I use the GLEW library, and I would like to keep on doing that, since it works fine.

If no working example is available, is there anywhere to find info on using extensions in general? I understand the idea that OpenGL has to fetch function pointers to make use of the extensions, but my knowledge of their practical use stops there.

is it like so:

Try googling for some examples.

A very simple demo of GL_ARB_point_sprite can be found in the Mesa distro in progs/demos/spriteblast.c. It’s based on an older demo by Mark Kilgard.


Do you need an explanation of point sprites (I guess this is very easy; I only know NV_point_sprite, but that is extremely easy to use), or of how to set up extensions, or of how to build a particle system in theory?


I will go for the Mesa example for now. I have the particle system running; I just need to change the particles to sprites instead of plain GL_POINTs.
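For reference, once GLEW reports GLEW_ARB_point_sprite, the switch from plain points to textured sprites is a small state change. A sketch of the draw-state setup, not a complete program: it assumes a current GL context, GLEW initialized, a particle texture already bound, and hypothetical names like num_particles and particles[] from the existing system:

```c
/* Render-state sketch for ARB_point_sprite (assumed context as above). */
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE_ARB);

/* Have GL generate 0..1 texture coordinates across each point,
 * so the whole texture is stretched over the sprite. */
glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);

glPointSize(16.0f);          /* sprite size in pixels (may be clamped) */

glBegin(GL_POINTS);          /* one vertex per particle, no quads needed */
for (int i = 0; i < num_particles; ++i)
    glVertex3fv(particles[i].pos);
glEnd();

glDisable(GL_POINT_SPRITE_ARB);
```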

Thanx for the help :slight_smile:

Don’t forget that even though all NVIDIA cards (from the GeForce 2 on) support ARB_point_sprite, among the ATI cards only the Radeon 9500 and better support it.

My guess is that NVIDIA has it implemented in hardware (and initially exposed it with their NV_point_sprite extension), whereas ATI implemented it only in the cards that support floating-point pixel shaders.

You can’t reasonably draw a distinction between implementing in hardware and implementing as a shader that runs on hardware. Currently, large sections (most) of the OpenGL fixed-function pipeline are actually implemented as shaders by both vendors. Previously, shaders didn’t exist and hardware microcode was not exposed. Now there’s less of a distinction for a lot of stuff.

There is a genuine bandwidth issue with a single point transformed into multiple vertices in a shader vs. in hardware (+ shader), so point sprites are unique in some ways w.r.t. requiring real hardware support, but I wouldn’t bet on that existing or not existing on anyone’s hardware.

[This message has been edited by dorbie (edited 02-20-2004).]

Dorbie, you are right that it is difficult to distinguish between what is done in dedicated hardware and what is done with shaders on hardware.

I said that since older NVIDIA cards support it, NVIDIA must have an implementation that is not done with shaders (of course, they may have changed that in newer cards). ATI, on the other hand, started offering it in their floating-point shader cards, so it seems possible that they use those shaders in their implementation; if they didn’t, it should be available on the 9200 and lower.

In any case, the point of my post was to mention that it is necessary to implement an alternative way to do sprites/billboards, because otherwise the application will not run on some cards that are still widely used. The part about how it is implemented is pure speculation.

ATI has always been gung-ho on things that involve hardware vertex generation, such as clipping, higher-order surfaces, etc. It would not surprise me if this is because they actually have a geometry processor of sorts on their card, that can do those kinds of things. It would then make sense if this could also generate four vertices out of one point entirely on the hardware side.

Also, the ATI guard bands are substantially narrower than those of the NVIDIA hardware, which leads me to believe that ATI prefers to clip in geometry; again leaning towards ATI hardware having hardware geometry processing. (not just vertex shading)

But, like so many of my hunches, this is just speculation based on available information and could be entirely wrong. And the designers will only seldom throw me a bone. (hint, hint :slight_smile: )