Pixel shaders for GL?

This is probably offtopic, and I’m not sure anyone will be able to answer on a message board, but…

Does anyone know of any extension plans to implement pixel shaders in GL? NVidia have had a spec. for NV_Vertex_Program, a vertex shader extension, on their website since late last year, but no drivers support it yet. Any likelihood of an NV_Pixel_Program (or ATI_Pixel_Program, or *_Pixel_Program…)? I am aware of the NV_Register_Combiners extension, but that’s not as flexible as a proper pixel shader implementation, as far as I know.

I ask for both practical and selfish reasons. The practical: we’re hoping to do some very high quality real-time work for the company I’m working for, and at present we have a functionality level pretty comparable to Q3’s shaders. We have a mechanism for swapping between Direct3D and OpenGL, but we want to take the shader interface to the ‘next level’ (i.e. a RenderMan-style language compiled down to pixel shaders), and I’d like to know how this functionality might manifest itself in GL.

The selfish reason: pixel shaders are cool, and I want to play with them.

Cheers,

Henry

NVidia hasn’t exposed a pixel shader extension because they don’t have a card that supports pixel shaders in hardware yet. You’ll have to live with the register combiners until the NV20.

“NVidia have had a spec. for NV_Vertex_Program, a vertex shader extension, on their website since late last year, but no drivers support it yet.”

I can’t use NV_Vertex_Program? I understood it was already possible to use it…

It doesn’t show up on my GeForce2 Ultra with the 6.31 drivers… A post on another mailing list from a guy at NVidia seemed to indicate that it isn’t exposed yet.
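For what it’s worth, a quick way to check whether a given driver exposes it is just to scan the extension string (plain GL, nothing NV-specific needed). A minimal sketch; the helper name is mine:

[code]
#include <string.h>
#include <GL/gl.h>

/* Illustrative helper: returns non-zero if the current context advertises
   GL_NV_vertex_program.  strstr() is a loose match (it would also hit a
   longer name that merely starts with this string), but it is enough for
   a quick check in a test app. */
int has_nv_vertex_program(void)
{
    const char *ext = (const char *) glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_NV_vertex_program") != NULL;
}
[/code]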

Henry

I see…
And if you can answer another question (I don’t want to open a new thread):
What about DX8 shaders?
Can’t you use them?

Sorry, I forgot that DX shaders are vertex shaders…

Why did they expose vertex shaders for DX and not for OGL?

I understand that vertex shaders are accelerated in DX8 on GeForce-class hardware. Pixel shaders aren’t, in my drivers (shame!). I heard rumours that later driver versions will expose that functionality, but again, that’s just a rumour I heard.

I’d really be happy if pixel-shader acceleration were possible on GeForce2 cards; I don’t know enough about the architecture to know whether it’s going to happen. (Although register combiners suggest it may be plausible…)

Henry

Well, it’s kind of weird, because in the spec for NV_Vertex_Program I saw that the vertex program is already at version 1.2, so I understood that it is already usable…

NV_vertex_program is in the leaked 7.17 drivers, but those are not official drivers, and nVidia WILL NOT help you if you have problems with them (they are unsupported). Furthermore, I thought I heard there were bugs in that version’s implementation of NV_vertex_program, so don’t count on anything.

That being said, NV_vertex_program is a vertex shader rather than a pixel shader. The two are different in a number of ways. As the names imply, pixel shaders calculate a value per pixel, whereas vertex shaders calculate per vertex, with the results interpolated across the triangle. While pixel shaders need hardware support to work (and no current hardware supports the full DX8 pixel shader spec), vertex shaders can actually be done in software reasonably fast on cards that don’t have the necessary hardware T&L support.
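For the curious, here is roughly what using it should look like, going only by the published spec (untested, obviously, since no official driver exposes it yet). The entry points would have to be fetched with wglGetProcAddress, and the program below just does the standard transform and passes the colour through:

[code]
/* Sketch based only on the published NV_vertex_program spec -- untested,
   since no official driver exposes the extension yet.  Assumes the NV
   entry points have already been obtained via wglGetProcAddress(). */
void setup_pass_through_program(void)
{
    /* Transform the position by the tracked modelview-projection matrix
       in c[0]..c[3] and pass the vertex colour straight through. */
    static const GLubyte vp[] =
        "!!VP1.0\n"
        "DP4 o[HPOS].x, c[0], v[OPOS];\n"
        "DP4 o[HPOS].y, c[1], v[OPOS];\n"
        "DP4 o[HPOS].z, c[2], v[OPOS];\n"
        "DP4 o[HPOS].w, c[3], v[OPOS];\n"
        "MOV o[COL0], v[COL0];\n"
        "END";

    GLuint prog;
    glGenProgramsNV(1, &prog);
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, prog, sizeof(vp) - 1, vp);
    glBindProgramNV(GL_VERTEX_PROGRAM_NV, prog);

    /* Have c[0]..c[3] track the concatenated modelview-projection matrix. */
    glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0, GL_MODELVIEW_PROJECTION_NV,
                    GL_IDENTITY_NV);
    glEnable(GL_VERTEX_PROGRAM_NV);
}
[/code]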

In regards to DX8 pixel shaders versus register combiners, there is a tradeoff. Register combiners support signed math (which I find VERY handy). However, DX8 pixel shaders support dependent texture reads and MIGHT (I’m not sure) support longer programs than the register combiners can.
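To illustrate the signed math, here is a rough sketch (assuming both texture units are already set up with the vectors you want, and the NV entry points fetched via wglGetProcAddress) of a single general combiner computing a signed dot product of two expanded textures, the classic DOT3 bump-mapping setup:

[code]
/* Minimal sketch: one general combiner computing the signed dot product of
   two textures (e.g. a normal map dotted with a light-vector texture).
   GL_EXPAND_NORMAL_NV maps [0,1] texel values to [-1,1] before the math. */
void setup_dot3_combiner(void)
{
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    /* A = expand(tex0), B = expand(tex1); write dot(A, B) into spare0.rgb. */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                       GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

    /* Final combiner computes A*B + (1-A)*C + D; with B = 1 and C = D = 0
       it simply outputs spare0 (negative results clamp to zero). */
    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                           GL_UNSIGNED_INVERT_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);

    glEnable(GL_REGISTER_COMBINERS_NV);
}
[/code]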

Originally posted by Quaternion:
[b]sorry, I forgot that DX shader are vertex shaders…

why did they expose vertex shaders for DX and not for OGL?[/b]

Actually, DX8 supports both vertex shaders AND pixel shaders. As far as I can tell, NV_vertex_program is an exact match for the DX8 vertex shader (though I don’t know them both inside and out, so there may be differences I’m not aware of). On the other hand, NV_register_combiners is about the closest you are going to get to DX8 pixel shaders in OpenGL. As stated in my last message, they each have their tradeoffs. However, to put things in perspective, current hardware has NO support for DX8 pixel shaders, so I guess that makes register combiners better today. Additionally, I expect (at least from nVidia) that once hardware supporting DX8 pixel shaders comes out, there will be OpenGL extensions that support everything in the DX8 pixel shaders AND MORE.

Pixel shaders have existed in OpenGL for a long time… on SGI machines! See OpenGL Shader on the SGI website.
Let’s hope they port it to NT/2K…

I’m pretty sure that nVidia is working on some variation of pixel shaders for OpenGL. In their vertex program specification they had a reference to an “NV_texture_shaders” extension. The reference in the spec disappeared after about a week.

Pretty much anything that shows up in DX will show up in OpenGL as an extension sooner or later.

j

The short answer is that there are extension specs forthcoming, but not until there’s hardware that implements the extensions.

As always, we’ll be supporting pretty much everything the hardware can do with OpenGL extensions.

Cass

But the hardware (GeForce) already CAN support the NV_Vertex_Program… when are you going to release it?

I have asked everyone I know at nVidia about this GL_NV_vertex_program extension (actually, they probably got fed up with me: sorry guys!) and the answer has always been: it will be released sooner or later.

I remember Matt writing in this discussion forum that it shouldn’t be long (that was a while ago, Matt!)…

I always thought that GL_NV_vertex_program would be released for GeForce: we shouldn’t have to wait for the NV20 (vertex shaders are available on GeForce through DX8, after all!).

Now, looking at nVidia’s web site, it seems clear that their priority (FOR THE MOMENT) is DX8, which is understandable: they just released Detonator 6.50 on their Registered Developers web site (the leaked version has been available for weeks…) but there is still no sign of 7.xx…

The problem, I should say, is that no one over there can (or is allowed to!) give us a release date for 7.xx…

AFAIK, the leaked 7.17 drivers are buggy, the 7.23 ones are fake, and the 7.27 ones are virtually non-existent…

Now, I know we would all appreciate an official statement on the status of GL_NV_vertex_program, but that does not seem to be the way things work at nVidia…

I still maintain that there is no point in releasing the specs publicly if we cannot use the extension. One of the answers to this statement was: “some of our close developers have access to this extension”. OK, then just release the specs to those people! Actually, there is one good thing about this early release: my “glext.h” is ready (it has been ready for 3-4 months now!).

I must say, I do not believe this thread will make anything happen, but I am happy to see that I am not the only one who would like to see this extension released…

Best regards.

Eric

P.S.: Matt, Cass, this is not an attack against you!


Originally posted by Eric:
Now, looking at nVidia’s web site, it seems clear that their priority (FOR THE MOMENT) is DX8, which is understandable:

Understandable… except for the fact that nVidia has separate teams for DirectX and OpenGL. I really don’t know what the holdup is.

>I still maintain that there is no point in releasing the specs publicly if we cannot use the extension. One of the answers to this statement was: “some of our close developers have access to this extension”. OK, then just release the specs to those people! Actually, there is one good thing about this early release: my “glext.h” is ready (it has been ready for 3-4 months now!).

It was frustrating that they released the spec without an implementation; I agree with you on that. But it’s also understandable: they probably wanted some developer input, and you can get that from someone who has just read through the spec.

… so give us the spec for NV_texture_shader too, NV

Originally posted by Siigron:
and you can get that from someone who has just read through the spec.

I do not completely agree: although you can point out some problems after reading the specs for a SIMPLE extension, it seems very difficult to realize that there could be a problem with such a big one…

Even if someone (and there are people like that!) could understand the whole thing by reading the specs, NOTHING can replace actually using the extension to discover that A or B could be done better.

That being said, it is my only complaint about nVidia after 2 years of developing on their hardware!

Best regards.

Eric

Originally posted by LordKronos:
I really don’t know what the holdup is.

As far as I understand, the people who are developing the drivers (at least the OGL ones; I have no contact with the DX8 people!) have nothing to do with release dates (which seems quite normal!).

As with any company, this is probably a strategy decided by the Marketing or PR department… They probably feel it is not that interesting to advertise on both fronts at the same time: if they did, everybody would look at them for x weeks (DX8 and OGL at the same time) instead of twice x weeks (x for DX8 and x for OGL!).

Of course, this is just a funny guess, so do not take it too seriously…

Regards.

Eric

>But the hardware (GeForce) already CAN support the NV_Vertex_Program… when are you going to release it?

I believe the current GeForce2 hardware is not sufficiently flexible and capable to support the NV_vertex_program extension in hardware. I believe the “leaked” drivers emulate the program you hand them by using 3DNow! and/or SSE instructions where possible. If they’re really good, they actually compile/generate code when you hand them the program, and set up a jump vector when you bind the program.

The question I have is this: how smart is the compiler? Will it recognize when I use some “standard” T&L code at the end of the program, and hand that work off to the GPU? If so, what are the “standard” code snippets I should use for them to be recognized?

These questions might of course be part of the reason the extension isn’t public yet.

By the way: there’s a hole in the spec. The spec says that vertices are “provoked” when you change attribute 0 (which is the vertex location), but it says nothing about the order in which attributes are loaded into parameter registers, so there is theoretically a race between a vertex being provoked and all the supporting data being available when you use glDrawElements() with more than one array enabled.
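For what it’s worth, the rule the spec does state is easiest to see in immediate mode: the secondary attributes are merely latched, and writing attribute 0 is what actually emits the vertex, so the position has to come last. A hypothetical snippet (attribute indices chosen arbitrarily):

[code]
/* Hypothetical immediate-mode snippet illustrating the provoking rule:
   secondary attributes are latched first, and writing attribute 0 (the
   position) is what actually emits the vertex.  The attribute indices
   here are arbitrary; a real program maps them to whatever v[] registers
   its vertex program reads. */
glBegin(GL_TRIANGLES);
    glVertexAttrib3fNV(8, 0.0f, 0.0f, 1.0f);       /* e.g. a normal in v[8] */
    glVertexAttrib4fNV(3, 1.0f, 0.0f, 0.0f, 1.0f); /* e.g. a colour in v[3] */
    glVertexAttrib3fNV(0, -1.0f, -1.0f, 0.0f);     /* position provokes the vertex */

    glVertexAttrib3fNV(0,  1.0f, -1.0f, 0.0f);     /* reuses the latched attributes */
    glVertexAttrib3fNV(0,  0.0f,  1.0f, 0.0f);
glEnd();
[/code]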