ARB/ATI_vertex_blend specs

Hello,

Where can I find the spec mentioned in the subject? Does anyone use this extension?

http://oss.sgi.com/projects/ogl-sample/registry/ARB/vertex_blend.txt

  • Matt

The ARB_vertex_blend spec can be found at the OpenGL extension registry:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/vertex_blend.txt

‘ATI_vertex_blend’ was deprecated after its promotion to ARB status and no longer exists.
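
If it helps, checking for the extension at runtime looks something like this. It’s just a minimal sketch assuming Windows and a <GL/glext.h> with the usual typedefs; note a plain strstr test can false-positive when one extension name is a prefix of another.

    #include <string.h>
    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glext.h>   /* PFNGL...ARBPROC typedefs */

    static PFNGLWEIGHTFVARBPROC    glWeightfvARB;
    static PFNGLVERTEXBLENDARBPROC glVertexBlendARB;

    /* Returns nonzero if GL_ARB_vertex_blend is advertised and its
       entry points could be fetched. Needs a current GL context. */
    int init_vertex_blend(void)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        if (!ext || !strstr(ext, "GL_ARB_vertex_blend"))
            return 0;

        glWeightfvARB = (PFNGLWEIGHTFVARBPROC)
            wglGetProcAddress("glWeightfvARB");
        glVertexBlendARB = (PFNGLVERTEXBLENDARBPROC)
            wglGetProcAddress("glVertexBlendARB");
        return glWeightfvARB != NULL && glVertexBlendARB != NULL;
    }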

How is the ARB extension different from the EXT_vertex_weighting extension supported by nVidia?

Will nVidia be changing their drivers to support the ARB extension? Have they already? 8P

Thank ye,
– Jeff

[This message has been edited by Thaellin (edited 03-02-2001).]

The EXT_vertex_weighting extension only supports blending with 2 modelview matrices.
ARB_vertex_blend is a more general extension that supports a queryable number of modelview matrices. Additionally, ARB_vertex_blend introduces some other niceties like “weight sum unity” for the final weight.
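
For the curious, basic two-matrix blending through the ARB interface looks roughly like the sketch below. The entry points are assumed to be resolved already, the matrix contents are placeholders, and error checking is omitted.

    #include <GL/gl.h>
    #include <GL/glext.h>  /* tokens; entry points assumed loaded */

    /* Sketch: draw one triangle blended between two modelview matrices.
       bone0/bone1 are hypothetical column-major 4x4 matrices. */
    void draw_blended_tri(const GLfloat bone0[16], const GLfloat bone1[16])
    {
        static const GLfloat w[2] = { 0.75f, 0.25f };
        GLint maxUnits;

        glGetIntegerv(GL_MAX_VERTEX_UNITS_ARB, &maxUnits); /* queryable limit */
        if (maxUnits < 2)
            return;

        glEnable(GL_VERTEX_BLEND_ARB);
        glVertexBlendARB(2);               /* blend across two matrices */

        glMatrixMode(GL_MODELVIEW0_ARB);   /* one matrix per vertex unit */
        glLoadMatrixf(bone0);
        glMatrixMode(GL_MODELVIEW1_ARB);
        glLoadMatrixf(bone1);

        glWeightfvARB(2, w);               /* current weights, like glColor */
        glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();
    }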

[This message has been edited by dginsburg (edited 03-02-2001).]

We do not support the ARB extension today.

In the future, we may support ARB_vertex_blend on GeForce3. However, we consider the “fixed-function vertex blending” to be a relatively uninteresting feature that is obsoleted by vertex programs.

  • Matt

mcraighead, dginsburg

thanks a lot.

mcraighead,

But rumor has it that the ARB did not approve the shader concept, so there will be no ARB extension for it. Don’t you think that adding support for NV_vertex_program just to get vertex blending is an overly complex approach?

It seems to me that what mcraighead is saying is that anyone who wants to do N-matrix vertex blending has to write some vertex shading code (or get some from the web), then set up his engine to initialize nVidia cards with that code and to initialize cards that support the ARB extension differently.

I would wish that all the common cases were supported without extra programmer effort in nVidia drivers, though. That would make living in a heterogeneous world easier.

No, I’m saying that simple, say, 4-matrix blending is really not a great way of doing matrix blending – you need to break your model down into a bunch of tiny batches and change the matrices for each one.

Indexed blending makes a lot more sense. You can fit a lot of matrices into the constant registers, and you can also add in other effects easily – deformation, for example.
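
For illustration, the indexed fetch might look something like this in NV_vertex_program. This is only a sketch: the constant layout and attribute slot are arbitrary choices of mine, and a real skinning program would fetch several indexed matrices and blend the results by weight; this one does a rigid one-bone transform just to show the addressing.

    #include <string.h>
    #include <GL/gl.h>
    #include <GL/glext.h>  /* NV tokens; entry points assumed loaded */

    /* Assumed constant layout: c[0..3] = view/projection matrix, bone
       matrices start at c[4], three rows each. v[9].x carries a
       precomputed per-vertex constant index: 4 + 3 * bone. */
    static const char skin_vp[] =
        "!!VP1.0\n"
        "ARL A0.x, v[9].x;\n"                /* bone offset -> address reg */
        "DP4 R0.x, c[A0.x + 0], v[OPOS];\n"  /* indexed bone transform */
        "DP4 R0.y, c[A0.x + 1], v[OPOS];\n"
        "DP4 R0.z, c[A0.x + 2], v[OPOS];\n"
        "MOV R0.w, v[OPOS].w;\n"
        "DP4 o[HPOS].x, c[0], R0;\n"         /* then view/projection */
        "DP4 o[HPOS].y, c[1], R0;\n"
        "DP4 o[HPOS].z, c[2], R0;\n"
        "DP4 o[HPOS].w, c[3], R0;\n"
        "END\n";

    void load_skin_program(GLuint id)
    {
        glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
        glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id,
                        (GLsizei)strlen(skin_vp), (const GLubyte *)skin_vp);
    }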

Will we support 4-matrix blending through the ARB extension eventually? Probably.

Right now, do we have more important things to work on? Certainly.

  • Matt

The problem is that you have a limited number of constant registers. For high-quality models, there are not enough (!). Also, current and recent hardware cannot run vertex programs very efficiently, but it does have some hardware matrix blending, as far as I understand.

To qualify my claim that the matrices are insufficient: models used in low-to-medium animation work may have 40-50 joints (you get into a lot of joints if you do fingers, hair, etc.). Supposing you need 8 constant registers for T&L and 3 registers per skinning matrix, and you have 96 constant registers, this gives you (96-8)/3 == an absolute maximum of 29 matrices (with one register to spare).

Sure, you can break the model into, say, head and body and left arm and right arm, each of which has fewer than 30 joints, but then you’re breaking your model into many pieces anyway, just not AS many as in the 4-matrix skinning case.
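
Just to make the budget explicit, here is that arithmetic as a trivial check (the 96/8/3 figures are the assumptions stated above):

    #include <stdio.h>

    int main(void)
    {
        const int total_regs = 96;  /* constant registers available */
        const int tnl_regs   = 8;   /* reserved for T&L */
        const int per_bone   = 3;   /* rows per 3x4 skinning matrix */

        printf("max bones: %d, spare registers: %d\n",
               (total_regs - tnl_regs) / per_bone,   /* 29 */
               (total_regs - tnl_regs) % per_bone);  /* 1  */
        return 0;
    }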

There is now a sample application that shows how to use ARB_vertex_blend.

-JasonM

Originally posted by JasonM [ATI]:
[b]There is now a sample application that shows how to use ARB_vertex_blend.

-JasonM[/b]

Hey Jason, I was playing around with that sample, and it gets a little messed up when you toggle wireframe with hardware blending, but not with software blending. And PN Triangles has some weird effects on it too.

Just thought I’d let ya know.

Are ATI playing catch-up? I sincerely hope not … ATI are fundamentally wrapped up in the current question of who wins the OpenGL wars.

I personally hope that ATI beat nVidia for supremacy - which will mean that nVidia has to perform better than ever - as will ATI! It will suit everyone involved in the technology.