I really think the OpenGL 2.0 specification is overlooking the topic of vertex generation. Curved surfaces, patches, and particle systems are all areas where a primitive object is defined by a description of the geometry.
Dynamically generating vertex information from a description of the geometry has been around for a long time. Currently, Sony's PlayStation 2 is the only platform that lets developers define their own data format for geometry and then generate the vertex information on the VU core.
It is only a matter of time before PC graphics cards support vertex creation on the GPU side, and I really think OpenGL 2.0 should be looking forward to this, rather than specifying only for the features of current cards.
Am I the only one that thinks this?
Let me know…
While it may only be "a matter of time", the ARB has no idea how this will even be handled in hardware. Applying a high-level language to a programming environment still in its infancy can easily render it virtually useless. Imagine wrapping NV10 register combiners in a HLL. Also, if the ARB guesses wrong about how the hardware will implement things, it could end up slowing things down more than speeding them up.
In any case, as the hardware comes together, the ARB can release a simple extension to cover this case.
Lastly, I'm not sure such technology would actually be a benefit. The main reason for adopting these techniques is performance, and I don't necessarily think their performance will improve by moving them off the CPU.
Yup, I’ve been thinking about this a bit and there are some implications.
Lookup tables (a.k.a. textures) in vertex programs would let you send junk (or only an index) and construct proper vertices.
Great, but what do you do if you want to update the lookups? For a particle system or virtually everything else, you want animation, so you need persistent state that changes over time.
You could respecify your 'index to vertex' lookup once per frame, but that would (all things being equal) perform exactly the same as vertex arrays from AGP memory (i.e. slower than on-card geometry). Boom.
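To make the 'index to vertex' idea concrete, here is a minimal CPU-side sketch of what such a lookup expansion would do. The types and function names are invented for illustration; they are not part of any real API, and on real hardware this loop would run in the vertex unit rather than in C.

```c
#include <stddef.h>

/* Hypothetical vertex and lookup-table types -- names are illustrative,
 * not part of any real API. */
typedef struct { float x, y, z; } Vertex;

/* Expand a stream of small indices into full vertices via a lookup
 * table, as a programmable vertex unit might when the app sends only
 * indices instead of complete per-vertex data. */
static void expand_indices(const Vertex *table, const unsigned *indices,
                           size_t count, Vertex *out)
{
    for (size_t i = 0; i < count; ++i)
        out[i] = table[indices[i]];   /* one index fetch replaces a full vertex */
}
```

The point of the sketch is the bandwidth trade: the app streams 4-byte indices instead of 12-byte positions, and the lookup table sits in fast card memory — until, as noted above, you need to respecify that table every frame.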
You could also generate vertices in the fragment pipeline, given float render targets and some clever memory management: write through NV_pixel_data_range and read the results back with NV_vertex_array_range, or something along those lines. Sorry if I'm off here; I'm not sure exactly how that would work. But the same problem applies. The pixel pipes currently have the major advantage of already having access to lookups and being able to write to (temporary?) card memory, but if you need to update the lookups once per frame, it's still useless.
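The fragment-pipeline idea in miniature, stripped of any extension machinery: treat each texel of a float RGBA render target as an (x, y, z, unused) record and repack the buffer into a vertex position array. This is only a data-flow sketch in plain C; on real hardware the interesting part would be doing this copy (or avoiding it) in card memory.

```c
#include <stddef.h>

typedef struct { float x, y, z; } Vpos;

/* Reinterpret a float RGBA pixel buffer as vertex positions: the RGB
 * channels of each texel become (x, y, z); alpha is ignored. Plain
 * memory traffic standing in for the readback/reinterpretation step. */
static void texels_to_vertices(const float *rgba, size_t texels, Vpos *out)
{
    for (size_t i = 0; i < texels; ++i) {
        out[i].x = rgba[4 * i + 0];
        out[i].y = rgba[4 * i + 1];
        out[i].z = rgba[4 * i + 2];
        /* rgba[4 * i + 3] (alpha) is unused */
    }
}
```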
So what would be really nice is a way to do it without resorting to lookups. Hmmm. That's quite tricky, methinks, as long as you want real animation and not just static geometry (with a free camera, if you will). You'd have to do some fancy fractal tricks to get a decent amount of geometry out of the limited number of constants you can pack onto the chip each frame. Lookups can of course double as constant data, but it's all of very limited use.
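As a toy illustration of "fractal tricks" amplifying a few constants into lots of geometry, here is a midpoint-displacement sketch: two endpoint heights plus one roughness value (the kind of tiny parameter set that fits in on-chip constants) expand into a whole strip of heights. The displacement here is deterministic for simplicity; a real version would hash the coordinates for pseudo-random detail.

```c
#include <stddef.h>

/* Recursively fill h[lo..hi] by midpoint displacement: each midpoint is
 * the average of its endpoints plus a displacement that shrinks with
 * the segment width. Only h[lo], h[hi], and `rough` need to be supplied
 * -- everything in between is generated. */
static void midpoint_fill(float *h, size_t lo, size_t hi, float rough)
{
    if (hi - lo < 2)
        return;
    size_t mid = (lo + hi) / 2;
    h[mid] = 0.5f * (h[lo] + h[hi]) + rough * (float)(hi - lo);
    midpoint_fill(h, lo, mid, rough * 0.5f);
    midpoint_fill(h, mid, hi, rough * 0.5f);
}
```

A strip of 2^n + 1 vertices thus costs three constants of upload, which is exactly the amplification the post above is after.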
walks off into the woods
Currently, graphics cards already support this in the form of curved-surface tessellation. The GeForce 3 can tessellate B-splines, Bézier patches, and Catmull-Rom splines. After tessellation, the vertex data is passed on to the vertex processor.
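For reference, this is the per-vertex math such a tessellator performs — a cubic Bézier evaluation via the Bernstein basis, shown here for a 2D curve (a bicubic patch is the same computation applied along u and then v). A sketch of the arithmetic only, not of any card's actual implementation.

```c
typedef struct { float x, y; } Point2;

/* Evaluate a cubic Bezier curve at parameter t in [0, 1] from its four
 * control points, using the Bernstein basis polynomials. */
static Point2 bezier3(const Point2 p[4], float t)
{
    float s  = 1.0f - t;
    float b0 = s * s * s;
    float b1 = 3.0f * s * s * t;
    float b2 = 3.0f * s * t * t;
    float b3 = t * t * t;
    Point2 r = {
        b0 * p[0].x + b1 * p[1].x + b2 * p[2].x + b3 * p[3].x,
        b0 * p[0].y + b1 * p[1].y + b2 * p[2].y + b3 * p[3].y
    };
    return r;
}
```

Stepping t across [0, 1] generates the vertex stream that then feeds the vertex processor.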
I have no idea how hard it would be to make the tesselator programmable. Perhaps it has already been tried. Perhaps the graphics card companies are already working on it.
I just hope this issue has not been overlooked. The ability for an application to define its own data formats and build vertex information from them is a very powerful feature, as Sony's PlayStation 2 demonstrates.
Currently, the OpenGL 2.0 pipeline looks like this (taken from the spec):
If tessellation is performed in hardware, that stage is not shown in the pipeline above; with it included, the pipeline would look like this:
Since hardware tessellation (vertex generation) already exists, making it programmable is the next step.
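One way to picture "programmable tessellation": the application supplies the surface function, and the tessellator only walks the (u, v) domain and invokes it per generated vertex. Everything below is invented for illustration — no such OpenGL entry point exists — but it shows the shape of the interface being asked for.

```c
typedef struct { float x, y, z; } Vtx;

/* Hypothetical application-supplied surface program: maps a (u, v)
 * domain point plus some constants to a vertex. */
typedef Vtx (*SurfaceFn)(float u, float v, const float *constants);

/* Hypothetical fixed part of the tessellator: walk an (nu+1) x (nv+1)
 * grid over [0,1]^2 and call the surface program at each grid point. */
static void tessellate(SurfaceFn fn, const float *constants,
                       int nu, int nv, Vtx *out)
{
    for (int j = 0; j <= nv; ++j)
        for (int i = 0; i <= nu; ++i)
            out[j * (nu + 1) + i] =
                fn((float)i / (float)nu, (float)j / (float)nv, constants);
}

/* Example surface program: a flat unit plane at z = 0. */
static Vtx flat_plane(float u, float v, const float *constants)
{
    (void)constants;
    Vtx r = { u, v, 0.0f };
    return r;
}
```

Swap `flat_plane` for a Bézier patch evaluator or a particle emitter and the same fixed grid walker generates entirely different geometry — which is exactly the application-defined vertex generation argued for at the top of the thread.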