NVidia: Where has GL_EXT_vertex_weighting gone?

The slow death of an extension :slight_smile:

In Detonator 41.xx the extension was exposed in the extension string.

In Detonator 45.xx, the extension was not exposed in the extension string, but the entry point was still there.

The new Detonator driver, 52.14, doesn’t support it at all: the entry point is NULL.
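Here’s roughly the check I use to see this for myself (a Win32 sketch that needs a current GL context; the strstr test is the quick-and-dirty version and doesn’t do an exact token match):

```c
#include <stdio.h>
#include <string.h>
#include <windows.h>
#include <GL/gl.h>

/* Report whether the driver still advertises EXT_vertex_weighting and
 * whether the old entry point is still exported. */
void report_vertex_weighting(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    int in_string   = (ext && strstr(ext, "GL_EXT_vertex_weighting") != NULL);
    PROC entry      = wglGetProcAddress("glVertexWeightfEXT");

    printf("GL_EXT_vertex_weighting: in string = %s, entry point = %p\n",
           in_string ? "yes" : "no", (void *)entry);

    /* 41.xx: yes / non-NULL.  45.xx: no / non-NULL.  52.14: no / NULL. */
}
```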

From the specification:

Status

Discontinued.

NVIDIA no longer supports this extension in driver updates after November 2002. Instead, use either ARB_vertex_program or NV_vertex_program.

You are right, I should have looked there first.

http://www.flipcode.com/cgi-bin/msg.cgi?showThread=00009347&forum=3dtheory&id=-1

Crossposting is evil.

Please explain why you think posting the same question to more than one newsgroup is evil.
It seems like the most logical thing to do.

But doesn’t this restrict a developer to only new cards? I only have a GF256, a GF2 DDR and a GF4 MX… all crap compared to newer cards, I know, but still, I bet there are many people still using them.

Many people may still be using them, but that doesn’t help NV sell new parts. So I wouldn’t be surprised if they would rather not spend time re-implementing this extension in their drivers.

So am I likely to buy a new NV card with shallow marketing tricks like this? No, I don’t think so.

Vertex programs are supported on the GeForce 256 and better (in software, but they still run well). So I guess the transition to vertex programs should be possible for those cards too.
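For two-matrix weighting specifically, something along these lines should do it. This is only a sketch: the choice of program locals 0–7 (rows of projection*modelview0 and projection*modelview1) and of generic attribute 1 for the weight are my own conventions, not anything the spec mandates.

```c
/* Minimal ARB_vertex_program replacement for EXT_vertex_weighting:
 *   clip = w * (P*M0) * v  +  (1 - w) * (P*M1) * v
 * The app loads the rows of P*M0 into program locals 0-3 and P*M1 into 4-7
 * (glProgramLocalParameter4fvARB) and feeds the weight in generic attrib 1. */
static const char *blend_vp =
    "!!ARBvp1.0\n"
    "ATTRIB pos   = vertex.position;\n"
    "ATTRIB wgt   = vertex.attrib[1];\n"
    "PARAM  m0[4] = { program.local[0..3] };\n"
    "PARAM  m1[4] = { program.local[4..7] };\n"
    "TEMP   c0, c1;\n"
    "DP4 c0.x, m0[0], pos;\n"
    "DP4 c0.y, m0[1], pos;\n"
    "DP4 c0.z, m0[2], pos;\n"
    "DP4 c0.w, m0[3], pos;\n"
    "DP4 c1.x, m1[0], pos;\n"
    "DP4 c1.y, m1[1], pos;\n"
    "DP4 c1.z, m1[2], pos;\n"
    "DP4 c1.w, m1[3], pos;\n"
    "SUB c0, c0, c1;\n"                     /* c0 = M0'v - M1'v      */
    "MAD result.position, wgt.x, c0, c1;\n" /* w*c0 + c1 = the blend */
    "MOV result.color, vertex.color;\n"
    "END\n";
```

Load it with glBindProgramARB and glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, strlen(blend_vp), blend_vp), and enable GL_VERTEX_PROGRAM_ARB while drawing. Lighting is left out to keep it short.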

[This message has been edited by flo (edited 10-22-2003).]

I have tested the emulated vertex shaders (at least with Detonator 45.xx) and they are not useful for me. I’d rather write my own 3DNow/SSE code.

On an Athlon 600 MHz with a GF2 MX and custom code, I can get 19 M point-lit triangles/s this way, something neither the hardware alone nor the emulated software shaders will give me.
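Roughly the kind of inner loop I mean, as a sketch with SSE intrinsics (two-matrix position blend only, lighting left out; matrices are column-major, and all the names are mine):

```c
#include <xmmintrin.h>   /* SSE1 intrinsics */

typedef struct { float x, y, z, w; } Vec4;

/* out[i] = weight[i]*M0*in[i] + (1 - weight[i])*M1*in[i]
 * m0, m1: 4x4 column-major matrices (16 floats each). */
void blend_transform(const float *m0, const float *m1,
                     const Vec4 *in, const float *weight,
                     Vec4 *out, int count)
{
    int i;
    for (i = 0; i < count; ++i) {
        __m128 x = _mm_set1_ps(in[i].x);
        __m128 y = _mm_set1_ps(in[i].y);
        __m128 z = _mm_set1_ps(in[i].z);
        __m128 w = _mm_set1_ps(in[i].w);

        /* M*v as a sum of scaled columns (unaligned loads to keep it simple;
         * aligned data and _mm_load_ps would be faster). */
        __m128 v0 = _mm_add_ps(
            _mm_add_ps(_mm_mul_ps(_mm_loadu_ps(m0 + 0), x),
                       _mm_mul_ps(_mm_loadu_ps(m0 + 4), y)),
            _mm_add_ps(_mm_mul_ps(_mm_loadu_ps(m0 + 8), z),
                       _mm_mul_ps(_mm_loadu_ps(m0 + 12), w)));
        __m128 v1 = _mm_add_ps(
            _mm_add_ps(_mm_mul_ps(_mm_loadu_ps(m1 + 0), x),
                       _mm_mul_ps(_mm_loadu_ps(m1 + 4), y)),
            _mm_add_ps(_mm_mul_ps(_mm_loadu_ps(m1 + 8), z),
                       _mm_mul_ps(_mm_loadu_ps(m1 + 12), w)));

        /* lerp: v1 + w*(v0 - v1) */
        __m128 t = _mm_set1_ps(weight[i]);
        _mm_storeu_ps(&out[i].x,
                      _mm_add_ps(v1, _mm_mul_ps(t, _mm_sub_ps(v0, v1))));
    }
}
```

The real version would also hoist the matrix loads out of the loop and prefetch the input stream, but that’s the shape of it.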

Originally posted by cschueler:
[b]
I have tested the emulated vertex shaders (at least with Detonator 45.xx) and they are not useful for me. I’d rather write my own 3DNow/SSE code.

On an Athlon 600 MHz with a GF2 MX and custom code, I can get 19 M point-lit triangles/s this way, something neither the hardware alone nor the emulated software shaders will give me.[/b]

That’s probably the best route for general support; SSE plus a plain-CPU fallback path may well cover most hardware at this point.

A client of mine had the same issue. They benchmarked the old weighting extension at 100 cycles per vertex (two-matrix case). The equivalent SSE code came in at under 50 cycles (and I was still learning SSE at the time).

Interestingly, they also decided to use the software skinning path for older ATIs as well.
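Picking the path is a one-time capability check at startup, roughly like this (a sketch; all the function names are placeholders, not the client’s actual code):

```c
#include <string.h>
#include <GL/gl.h>

typedef void (*SkinPathFn)(void);

static void skin_gpu(void)   { /* ARB_vertex_program skinning */ }
static void skin_sse(void)   { /* hand-written SSE skinning   */ }
static void skin_plain(void) { /* portable C fallback         */ }

static int cpu_has_sse(void) { return 1; /* really detected via CPUID */ }

/* Needs a current GL context; the strstr check is the quick-and-dirty
 * version and doesn't do an exact token match. */
static SkinPathFn choose_skin_path(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);

    if (ext && strstr(ext, "GL_ARB_vertex_program"))
        return skin_gpu;     /* hardware (or driver-emulated) path */

    return cpu_has_sse() ? skin_sse : skin_plain;
}
```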

Avi

[This message has been edited by Cyranose (edited 10-22-2003).]

I thought software vertex programs were also available on the TNT2 etc., and that the implementation is rather fast.

Does the TNT2 have hardware T&L? If not, then I guess software emulated vertex programs certainly wouldn’t be much slower than transforms usually would be on that card.

This sucks. I had a demo that made EXTensive use of this extension and now I see it’s pretty much useless… I have no idea how to replace what I was doing with a shader (or I’d have used one to begin with…)
Damn Indian givers; that’s it, I refuse to update my nvidia drivers or hardware from now on (joking)…

There are definitely two schools of thought about deprecation of hardware features.

One school thinks that, no matter what, backwards compatibility should never be broken. Most people seem to be in this camp, based on the comments I’ve seen in this thread and the old one on paletted textures.

The other school thinks that old baggage should be discarded if the functionality is a subset of some new functionality. I know I’d rather have another programmable vertex shader in parallel than keep around the old fixed-function T&L. The drivers could build shaders on the fly to emulate old fixed-function.

I’m not sure yet which school of thought I subscribe to. I tend to lean towards the second because I want hardware to move forward as quickly as possible and not waste silicon on old functionality. On the other hand, as a programmer, I will probably be annoyed when some extension I have used in the past is no longer supported (I am dreading when register combiners go the way of the dodo).

I guess it seems like the best thing would be if the drivers continued to support old extensions by emulating them using new extensions (behind the scenes). Why not do this?

Anyway, just my 2 cents on the issue.

[This message has been edited by Zeno (edited 10-22-2003).]

I’m definitely in the “out with the old, in with the new” camp. I don’t mind an extension surviving a few generations, but once superior functionality is available, the old extension should be dropped.

At the very least, functionality that was not very good to begin with, and was not widely used (EXT_vertex_weighting falls into this category) should be a prime candidate for removal. Sure, there are a few vertex weighting demos out there, but no actual product ever even considered using it. The extension didn’t expose decent functionality, and better functionality exists.

On an Athon 600 Mhz with GF2 MX and custom code, I can get 19 M point-lit triangles / s this way

With EXT_vertex_weighting? I highly doubt it. The size of your strips for any complicated model would be too small to effectively get around per-primitive and state-change overhead.

Interestingly, they also decided to use the software skinning path for older ATIs as well.

Not surprising. Hardware skinning didn’t really become reasonably available until the advent of vertex shaders. The vertex_blend extension did make a valiant attempt to provide decent skinning, but vertex programs are the preferred and superior method. Vertex_blend was never supported by nVidia, and ATi was much smaller than they are now, so nobody bothered to use it. And now we have vertex programs for our skinning needs.

I am dreading when register combiners go the way of the dodo

RC’s are never going away; Doom3 supports them. Just like CVA’s, you’re never going to get rid of an extension that is (going to be) so widely used.

I guess it seems like the best thing would be if the drivers continued to support old extensions by emulating them using new extensions (behind the scenes). Why not do this?

To an extent, this is being done. However, for each old extension that has to be re-implemented on top of new functionality, driver development time is wasted. I’d rather nVidia spend their time improving their fragment-program compiler than back-porting EXT_vertex_weighting.

For ATi, this might be reasonable, because they already have a framework in their driver for building shaders for old fixed-function features; they no longer have fixed-function hardware, so they had no choice. nVidia still has various bits of fixed-function hardware lying around, so they never had to write shader-compiling code of this kind. For them, it would be a significant undertaking if any actual fixed-function hardware were removed.

I’m also in favor of suppressing older/deprecated extensions.

Try to be realistic: for a vendor, it’s a nightmare to support everything and to make sure everything is bug-free. I’d rather have NVidia or ATI work on the new extensions than waste their time trying to maintain old ones.

In addition, the OpenGL extensions mechanism has become a real mess. How many extensions are available at the moment? I’m sure we’re not very far from the hundredth. It makes the whole API a nightmare to maintain, with difficult dependencies between new and older extensions. I’d even dare to say that proprietary extensions should be dropped in favor of ARB ones, when possible. And when i mean “dropped”, i mean, completely removed from the driver/extension string.

I hate to say it, as I’m an OpenGL programmer at heart, but… DX9 is much better in this area. Coding advanced effects in OpenGL is tricky.

Y.

Two comments:

  1. This extension never performed very well, and wasn’t useful for much, so I don’t miss it.

  2. Saying that software vertex programs “run well” on a GeForce 2 is only true if the CPU is otherwise idle. The product I’m working on pushes enough polys and does enough physics and other things that a Pentium IV at 2.4 GHz is NOT ENOUGH to match a GeForce 2 MX and a Pentium III/800.

> With EXT_vertex_weighting? I highly doubt
> it. The size of your strips for any
> complicated model would be too small to
> effectively get around per-primitive
> and state-change overhead.

Of course I don’t get 19 M verts/s with vertex weighting. I meant the 19 M figure as an example of how custom CPU code can actually be faster than shaders or hardware T&L (on older cards).

Of course I don’t get 19 M verts/s with vertex weighting. I meant the 19 M figure as an example of how custom CPU code can actually be faster than shaders or hardware T&L (on older cards).

True though it may be, is your CPU doing anything but T&L? A GeForce 256 can get around 4-8M lit tris, but it frees up the CPU significantly.