I’m definitely in the “out with the old, in with the new” camp. I don’t mind an extension surviving a few generations, but once superior functionality is available, the old extension should be dropped.
At the very least, functionality that was not very good to begin with, and was not widely used (EXT_vertex_weighting falls into this category) should be a prime candidate for removal. Sure, there are a few vertex weighting demos out there, but no actual product ever even considered using it. The extension didn’t expose decent functionality, and better functionality exists.
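For anyone who never used it, here is roughly what EXT_vertex_weighting looked like: one weight per vertex, blended between exactly two modelview matrices. This is a minimal sketch from memory of the spec, not code from any shipping product; the variable names are placeholders, the tokens should be checked against glext.h, and the EXT entry point has to be fetched through wglGetProcAddress as usual.

    /* blend each vertex between two matrices: w*M0 + (1-w)*M1 */
    glEnable(GL_VERTEX_WEIGHTING_EXT);

    glMatrixMode(GL_MODELVIEW0_EXT);        /* aliases the normal GL_MODELVIEW */
    glLoadMatrixf(boneMatrix0);
    glMatrixMode(GL_MODELVIEW1_EXT);
    glLoadMatrixf(boneMatrix1);

    glBegin(GL_TRIANGLES);
    for (int i = 0; i < vertCount; ++i) {
        glVertexWeightfEXT(weights[i]);     /* weight for matrix 0; matrix 1 gets 1-w */
        glNormal3fv(normals[i]);
        glVertex3fv(positions[i]);
    }
    glEnd();

The two matrices are global state, which is exactly the problem: a mesh with more than two bones has to be chopped into a separate batch for every pair of bones it touches.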
On an Athlon 600 MHz with a GF2 MX and custom code, I can get 19M point-lit triangles/s this way
With EXT_vertex_weighting? I highly doubt it. The strips for any complicated model would be too small to amortize the per-primitive and state-change overhead.
Interestingly, they decided to use the software skinning path for older ATIs as well.
Not surprising. Hardware skinning didn’t really become reasonably available until the advent of vertex shaders. The vertex_blend extension made a valiant attempt at providing decent skinning, but vertex programs are the preferred and superior method. Vertex_blend was never supported by nVidia, and ATi was much smaller then than they are now, so nobody bothered to use it. And now we have vertex programs for our skinning needs.
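To make that concrete, here is a hedged sketch of two-bone skinning with ARB_vertex_program, not code from this thread. The slot assignments are assumptions: the blend weight arrives as texture coordinate set 1, the two bone matrices sit in local parameters 0–7 stored row-major so the DP4s work out, strlen needs <string.h>, and the ARB entry points come from the usual GetProcAddress dance.

    static const char *skinVP =
        "!!ARBvp1.0\n"
        "ATTRIB pos    = vertex.position;\n"
        "ATTRIB weight = vertex.texcoord[1];\n"   /* blend weight, assumption */
        "PARAM  mvp[4]   = { state.matrix.mvp };\n"
        "PARAM  bone0[4] = { program.local[0..3] };\n"
        "PARAM  bone1[4] = { program.local[4..7] };\n"
        "TEMP p0, p1, skinned;\n"
        "DP4 p0.x, bone0[0], pos;  DP4 p0.y, bone0[1], pos;\n"
        "DP4 p0.z, bone0[2], pos;  DP4 p0.w, bone0[3], pos;\n"
        "DP4 p1.x, bone1[0], pos;  DP4 p1.y, bone1[1], pos;\n"
        "DP4 p1.z, bone1[2], pos;  DP4 p1.w, bone1[3], pos;\n"
        "SUB skinned, p0, p1;\n"
        "MAD skinned, weight.x, skinned, p1;\n"   /* w*p0 + (1-w)*p1 */
        "DP4 result.position.x, mvp[0], skinned;\n"
        "DP4 result.position.y, mvp[1], skinned;\n"
        "DP4 result.position.z, mvp[2], skinned;\n"
        "DP4 result.position.w, mvp[3], skinned;\n"
        "MOV result.color, vertex.color;\n"
        "END\n";

    void setup_skinning_program(const float *bone0, const float *bone1)
    {
        GLuint prog;
        glGenProgramsARB(1, &prog);
        glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
        glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(skinVP), skinVP);
        glEnable(GL_VERTEX_PROGRAM_ARB);
        for (int row = 0; row < 4; ++row) {       /* one matrix row per parameter */
            glProgramLocalParameter4fvARB(GL_VERTEX_PROGRAM_ARB, row,     bone0 + row * 4);
            glProgramLocalParameter4fvARB(GL_VERTEX_PROGRAM_ARB, 4 + row, bone1 + row * 4);
        }
    }

And unlike vertex_blend, you aren’t stuck at two matrices: with the address register you can index a whole matrix palette per vertex, so the mesh doesn’t have to be split per bone pair.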
I am dreading the day register combiners go the way of the dodo
RC’s are never going away; Doom3 supports them. Just like CVA’s, you’re never going to get rid of an extension that is (going to be) so widely used.
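For reference, driving RCs looks like this: a hedged, minimal single-combiner DOT3 setup, the sort of per-pixel N·L term Doom3-class renderers build their lighting out of. It is illustrative only, not the game’s actual combiner setup, and it assumes the normal map is bound to texture unit 0 with the tangent-space light vector arriving in the primary color.

    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    /* combiner 0, RGB portion: spare0 = expand(tex0) . expand(primary) */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                       GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

    /* final combiner: pass spare0 through (A*B with B forced to 1.0) */
    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                           GL_UNSIGNED_INVERT_NV, GL_RGB);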
I guess it seems like the best thing would be if the drivers continued to support old extensions by emulating them using new extensions (behind the scenes). Why not do this?
To an extent, this is being done. However, every old extension that has to be re-implemented on top of newer functionality eats driver development time. I’d rather nVidia spend their time improving their fragment-program compiler than on back-porting EXT_vertex_weighting.
For ATi, this might be reasonable, because they already have a framework in their driver for building shaders out of old fixed-function state; they no longer have fixed-function hardware, so they had no choice. nVidia still have various bits of fixed-function hardware lying around, so they never had to write that kind of shader-generation code. For them, it would be a significant undertaking if any actual fixed-function hardware were removed.
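A rough sketch of the kind of framework that implies, with every name here hypothetical rather than taken from any real driver: hash the relevant fixed-function state and generate-and-cache an equivalent fragment program the first time each combination shows up.

    #include <stdlib.h>
    #include <string.h>
    #include <GL/gl.h>

    /* hypothetical key covering the fixed-function state that affects fragment processing */
    typedef struct {
        unsigned texEnvMode[4];      /* GL_MODULATE, GL_DECAL, ... per unit */
        unsigned fogMode;
        int      lightingEnabled;
    } FixedFunctionKey;

    typedef struct CacheEntry {
        FixedFunctionKey   key;
        GLuint             program;       /* generated ARB_fragment_program id */
        struct CacheEntry *next;
    } CacheEntry;

    static CacheEntry *cache = NULL;

    /* hypothetical: emits program text for this state combination and compiles it */
    extern GLuint generate_and_compile(const FixedFunctionKey *key);

    GLuint program_for_fixed_function(const FixedFunctionKey *key)
    {
        for (CacheEntry *e = cache; e; e = e->next)
            if (memcmp(&e->key, key, sizeof *key) == 0)
                return e->program;         /* already generated on an earlier draw */

        CacheEntry *e = malloc(sizeof *e);
        e->key     = *key;
        e->program = generate_and_compile(key);
        e->next    = cache;
        cache      = e;
        return e->program;
    }

The lookup itself is cheap; the cost is that somebody has to write and maintain a code generator for every piece of legacy state, which is exactly the driver-development time in question.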