Pixel shaders for GL?

A couple of answers:

  1. We didn’t pick the date that DX8 info went “live” - but once it did, it was appropriate for us to go live with our DX8 demos and effects.

  2. Releasing our upcoming OpenGL extensions and tech demos early exposes a lot about our upcoming hardware. That is why this information is not released early.

  3. bgl, I’m not sure what you’re talking about with regard to provoking attributes in vertex arrays. There’s no race condition under any circumstances with NV_vertex_program. All the vertex attributes are loaded before the program is invoked when using vertex arrays. In immediate mode, the attributes are simply a snapshot of the current settings when the provoking attribute (0) is set (see the sketch below).
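
For illustration, the vertex-array path looks roughly like this. It is only a sketch; the entry-point and enum names are from memory of the NV_vertex_program spec, so double-check them against the actual header:

#define NUM_VERTS 36

/* Per-vertex data lives in attribute arrays. For each vertex, every
   enabled array is fetched before the program runs for that vertex,
   so there is no ordering issue to worry about. */
static GLfloat positions[3 * NUM_VERTS];  /* attrib 0 (the vertex)      */
static GLfloat weights[1 * NUM_VERTS];    /* attrib 1 (just an example) */

glVertexAttribPointerNV(0, 3, GL_FLOAT, 0, positions);
glVertexAttribPointerNV(1, 1, GL_FLOAT, 0, weights);
glEnableClientState(GL_VERTEX_ATTRIB_ARRAY0_NV);
glEnableClientState(GL_VERTEX_ATTRIB_ARRAY1_NV);

glEnable(GL_VERTEX_PROGRAM_NV);           /* assumes a valid program is bound */
glDrawArrays(GL_TRIANGLES, 0, NUM_VERTS);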

Hope this helps…
Cass

The DX8 vertex shaders run in hardware on my GeForce2. Seeing how OpenGL vertex shaders are very similar (identical functionality?), I think that they would run in hardware as well.

j

Cass: I meant not the static (per-invocation) attributes, but the per-vertex data arrays. The spec says that changing the value of argument 0 provokes the program, but the wording about when the other vertex data is updated is loose enough that a conforming but broken implementation could conceivably update argument 0 FIRST, and then the other arguments out of the argument arrays, resulting in early provocation. I’m sure nobody in their right mind would actually design hardware or a driver that did that, but it seems like a hole nevertheless.

2 j
Are you sure that your card really supports vertex shaders? I think that only the NV20 is able to support vertex shaders in HW.

Originally posted by IronPeter:
2 j
Are you sure that your card really supports vertex shaders? I think that only the NV20 is able to support vertex shaders in HW.

OK, as far as I can tell, DirectX 8 vertex shaders and NV_vertex_program are exactly the same. Keeping that in mind, Richard Huddy of nvidia said:

the existing GeForce family of chips do not have any hardware support for either Vertex Shaders or Pixel Shaders. [That’s GeForce256, GeForce2, GeForce2 Ultra and GeForce2 MX].

I think that means no hardware support for vertex programs until the NV20.

[This message has been edited by LordKronos (edited 02-03-2001).]

Originally posted by bgl:
Cass: I meant not the static (per-invocation) attributes, but the per-vertex data arrays. The spec says that changing the value of argument 0 provokes the program, but the wording about when the other vertex data is updated is loose enough that a conforming but broken implementation could conceivably update argument 0 FIRST, and then the other arguments out of the argument arrays, resulting in early provocation. I’m sure nobody in their right mind would actually design hardware or a driver that did that, but it seems like a hole nevertheless.

bgl, I’m not sure what problem you see with the wording. The only possible problem would occur in immediate mode, as calls to glDrawArrays, glDrawElements or even glArrayElement inherently keep things “in sync”.

So, in immediate mode, there’s always the concept of “current data”, whether that applies to vertex, normal, color, texcoord or whatever. You make a call to glColor3fv, it sets the current color. You make a call to glNormal3fv, it sets the current normal, and so on. The same goes for the vertex attributes. You call glVertexAttrib3fNV(1, 1., 1., 1.) and it merely sets the current value for the index 1 vertex attribute, with no invocation of the vertex program. In fact, it’ll do this even if vertex program mode is disabled.

Everything “syncs” when a glVertex call is made (like glVertex3fv) or when a glVertexAttrib call is made with an index of 0. Then the vertex program is invoked (assuming vertex program mode is enabled, a valid program is bound, etc.) with the “current” vertex attributes used as arguments.

When using vertex programs with immediate mode (which, BTW, is ill-advised for performance purposes), always call the index 0 vertex attribute last, like so:

glBegin(GL_TRIANGLES);
/* set attribute 1 first; attribute 0 goes last because it provokes the vertex */
glVertexAttrib3fvNV(1, u[0]);
glVertexAttrib3fvNV(0, v[0]);   /* provokes vertex 0 */
glVertexAttrib3fvNV(1, u[1]);
glVertexAttrib3fvNV(0, v[1]);   /* provokes vertex 1 */
glVertexAttrib3fvNV(1, u[2]);
glVertexAttrib3fvNV(0, v[2]);   /* provokes vertex 2 */
glEnd();

It may look backwards, but that’s the only way it’ll work correctly.

Well, if you argue that you could put the attrib 0 updates before the updates of attrib 1, you could equally well argue that when using standard GL vertex arrays, you could put the Vertex call before calls to Color, TexCoord, etc.

Vertex attrib 0 is the vertex. Vertex attrib 2 is the color. And so on.

So clearly, when using vertex arrays, vertex attrib 2 will be specified before vertex attrib 0.

  • Matt

the existing GeForce family of chips do not have any hardware support for either Vertex Shaders or Pixel Shaders. [That’s GeForce256, GeForce2, GeForce2 Ultra and GeForce2 MX].

Oops, my mistake. When I ran the DX8 demos from nVidia, the vertex programs never had to resort to the reference software drivers, so I assumed that the programs were run in hardware.

I just realized that DX uses either a software or hardware T&L pipeline, and says that it is accelerated no matter what.

So yes, the vertex programs are run in software. Speedwise, they seemed fine, though.

j

j, your post is kind of confusing.
Is there or isn’t there hardware support for the Vertex Program?

sorry j, I read your post again. So there is NO hardware support for the vertex shaders/program yet.

But when you say there is no hardware support, you mean that EVERY instruction in the vertex program will run in software?

Originally posted by cass:

2) Releasing our upcoming OpenGL extensions and tech demos early exposes a lot about our upcoming hardware. That is why this information is not released early.

I was re-reading the thread and realized I missed this one: Cass, do you mean that you did not release the specs for GL_NV_vertex_program VERY early ???

Or do you mean that releasing the specs is harmless compared to releasing a driver/tech demo ? (when I say harmless, I mean harmless for the secret around your next chip !).

Anyway, if this extension will only be released at the same time as the NV20 (or whatever you call it: I must say, MS is much better than nVidia at names… Whistler, Blackcomb, …), why not tell us ???

Regards.

Eric

[This message has been edited by Eric (edited 02-05-2001).]

But when you say there is no hardware support, you mean that EVERY instruction in the vertex program will run in software?

I’m not a huge expert or anything, but I think that right now, vertex programs are done completely in software. The card is still used for rasterization, of course.

It would probably be too hard/inefficient to have a half-hardware, half-software implementation.

j

ATI is indeed working on a pixel shader extension spec. It will most likely not be released before any hardware that implements it, since a software implementation of pixel shaders is only marginally useful.

dave

Originally posted by paddy:
Pixel shaders have existed in OpenGL for a long time … on SGI machines! See the OpenGL Shader on the SGI website.
Let’s hope they port it to NT/2K…

That’s a different story. They use standard OpenGL and multipass (LOTS of passes) to get the effects. Given floating-point framebuffers and the pixel_texture extension, they can do full RenderMan, which is pretty cool. But the sheer number of passes for non-trivial shaders makes it a lot less useful for the realtime crowd, IMHO.

Originally posted by mcraighead:
So clearly, when using vertex arrays, vertex attrib 2 will be specified before vertex attrib 0.

  • Matt

Where in the spec does it say that? This was exactly what I was getting at (for array mode, not immediate mode).

Someone else also said that array mode inherently keeps things “in sync”. I have not been able to trace that claim back to the spec, either.

[This message has been edited by bgl (edited 02-07-2001).]

See page 23 of the OGL 1.2.1 spec.

For each enabled array, it is as though the corresponding command from section 2.7 or section 2.6.2 were called with a pointer to element i. For the vertex array, the corresponding command is Vertex[size][type]v, where size is one of [2,3,4], and type is one of [s,i,f,d], corresponding to array types short, int, float, and double respectively. The corresponding commands for the edge flag, texture coordinate, color, color index, and normal arrays are EdgeFlagv, TexCoord[size][type]v, Color[size][type]v, Index[type]v, and Normal[type]v, respectively. If the vertex array is enabled, it is as though Vertex[size][type]v is executed last, after the executions of the other corresponding commands.

Mark Kilgard has already added language to the latest version of the spec that further clarifies the point about attrib 0 being specified last with vertex arrays.

Note that this behavior matches that of VertexAttribs, which specifies the provided attributes in reverse order.
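
Spelled out, the quoted spec language means that with, say, color and vertex arrays enabled, specifying element i behaves as though something like the following were executed. This is only a rough paraphrase assuming tightly packed GLfloat data, not the literal driver implementation:

/* Conceptual expansion of glArrayElement(i) with color and vertex
   arrays enabled: the Vertex command is issued last, so the vertex
   is only provoked after the other attributes have been updated. */
void arrayElementEquivalent(const GLfloat *color_ptr,   /* as passed to glColorPointer  */
                            const GLfloat *vertex_ptr,  /* as passed to glVertexPointer */
                            GLint i)
{
    glColor4fv (color_ptr  + 4 * i);
    glVertex3fv(vertex_ptr + 3 * i);
}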

  • Matt

>If the vertex array is enabled, it is as
>though Vertex[size][type]v is executed last

Yay!

Now I can sleep again :)