Pixel Shaders with CG

Originally posted by someone:
nvidia will not enable looping/branching until their hardware can support it, then they will enable it, even if other vendors aren’t able to support it.

Originally posted by Nutty:
and?!? Well you could say it’s not fair that GL2 will support features that other vendors can’t support in hardware. It’s the same situation. Will you call the ARB unfair, because gl2BlahBlahBlah is only implemented in hardware on certain cards?

The difference, obviously, is that OpenGL2 is ultimately controlled by the ARB, and the ARB doesn’t (in principle) benefit from favouring any particular company, whereas nvidia does.

However, there aren’t any bonus points for fairness. Ultimately we just use whatever works best. The point is that evidence like this, that nvidia is acting in its own vested interests, suggests they’re not acting for the graphics community as a whole, which is perhaps an indication of how well Cg will serve us in the future.

Ash

[This message has been edited by ash (edited 07-04-2002).]

Originally posted by knackered:
…and still you people refuse to consider direct3d - all I can say is you must know an awful lot of people running linux/irix! I know game developers don’t…

You will be very surprised: the number 1 game in the US at the moment is OpenGL only (Neverwinter Nights).
Also, a lot of top games this year have been OpenGL only; in fact I’d give 10-1 odds (you can’t resist) that OpenGL-only games have taken the top position in the US more often than D3D-only ones.

knackered, do you give in, or should I say “nee” again?

First of all, the OpenGL vs D3D debate has no place in a discussion of Cg.

As far as Cg is concerned… in many ways it is the right way to go if nVidia pulls it off correctly. OpenGL 2.0 has one fatal flaw: it is utterly useless. Not only does it not exist yet, but when it does no hardware will support it. It will be 2 to 3 years before we see GL 2.0 in hardware.

I have a card in my computer that fully supports Cg at this very moment. When nVidia releases the GeForce 5, (once again, if nVidia does it right), all of my compiled Cg code will work just fine on it. I’ll download a new Cg compiler with new expanded functionality, but all the old Cg shaders will still compile. When the hardware is available for it, Cg will provide GL 2.0 capabilities.
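
For illustration, this is roughly what such a Cg shader looks like: a minimal diffuse-lighting vertex program. It is only a sketch; the struct and parameter names are made up for the example and are not taken from any NVIDIA sample.

[code]
// Minimal Cg vertex program: clip-space transform plus a simple diffuse term.
// All struct and parameter names are illustrative.
struct appin {
    float4 position : POSITION;
    float3 normal   : NORMAL;
};

struct vertout {
    float4 hpos  : POSITION;
    float4 color : COLOR0;
};

vertout main(appin IN,
             uniform float4x4 modelViewProj,   // pre-composed modelview-projection matrix
             uniform float3   lightDirObj)     // light direction in object space
{
    vertout OUT;
    OUT.hpos = mul(modelViewProj, IN.position);            // transform to clip space
    float diffuse = max(dot(normalize(IN.normal), normalize(lightDirObj)), 0.0);
    OUT.color = float4(diffuse, diffuse, diffuse, 1.0);    // greyscale diffuse colour
    return OUT;
}
[/code]

Compiled offline with something like “cgc -profile vp20 diffuse.cg” (the exact profile names depend on the toolkit release), this comes out as vertex-program assembly that today’s hardware can load; recompiling the same source against a newer profile later is the backwards-compatibility story being described here.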

As long as the Cg compiler is fully backwards compatible (and provides the ability to compile somewhat complex shaders for older hardware to the extent that it is possible), Cg is a better solution than GL 2.0. Ultimately, the problem with GL 2.0 is that, while it is a nice future, it is a useless present and will not be very usable for the near future. Cg is here and mildly useful now; it will be here when the GL 2.0 shader is around, and it will still be useful.

As to the argument that nVidia is using Cg as a power-grab to reclaim the market… of course they are. They are caught between 2 organizations beyond their control: Microsoft and the ARB.

They can’t control what GL 2.0 is simply because everyone on the ARB competes with them. They want to bring the market leader down, so they will do everything in their power to make GL 2.0 as difficult as possible for nVidia to use.

At the same time, Microsoft benefits by having multiple graphics card vendors in a good position, which is why the 1.4 Pixel Shader was, basically, written by ATi (to offset the fact that the 1.1 PS was written by nVidia). D3D 9’s shaders don’t provide any side in the market with an advantage.

Trapped between GL 2.0 being out of their control and D3D 9 not providing them the advantage D3D 8 did, they have one option: make their own language. In a way, Cg is a lot like D3D’s shaders, only with nVidia in charge of the language. Also, it gives us, as users of the language, growing room, which is not what OpenGL 2.0’s shaders are designed for.

Once you provide conditional branching and looping constructs at both the vertex and pixel levels (and they could use the exact same interface), there’s really nothing else Cg would need that isn’t already part of the language.

Is this a blatantly monopolistic move? Sure, since it is highly likely that none of their competitors will be writing a Cg version. At the same time, there are worse companies who could monopolize the graphics card market. That’s one of the reasons I don’t mind that Microsoft has its monopolies: as long as they keep producing products I like to use and that are productive, I will continue to use them. And as long as nVidia continues to produce products that are of a high quality, I am willing to overlook their blatant power-grab.

As long as Cg is backwards compatible, and the language itself doesn’t change too much (as I said, the only additions needed are looping and conditional branching syntax), it should be a better alternative than the vaporware that is GL 2.0.

Originally posted by Korval:
First of all, the OpenGL vs D3D debate has no place in a discussion of Cg.

Yes it does. Cg is for d3d also, and it works very well on that API because it only has to communicate with a single interface, rather than lots. Cg for opengl is bad because it doesn’t…basically…work. Until an ARB pixel shader extension becomes available, it’s useless for any non-nvidia cards. In fact, until nvidia release an nvidia pixel shader profile, it’s useless even on their hardware! What are you using Cg for, Korval? Transforming your light vectors into tangent space, and then kicking in with nvparse?

It will be 2 to 3 years before we see GL 2.0 in hardware.

The hardware is available now - or didn’t you know? 3dLabs Wildcat VP.

I have a card in my computer that fully supports Cg at this very moment.

You have an nvidia card and you program in d3d then…

When nVidia releases the GeForce 5, (once again, if nVidia does it right), all of my compiled Cg code will work just fine on it.

That’s good for you. Meanwhile we’ll all be using the cut-down consumer 3dlabs cards, writing in gl2 shading language. 3dlabs have been bought by Creative, don’t you know?

Once you provide conditional branching and looping constructs at both the vertex and pixel levels (and they could use the exact same interface), there’s really nothing else Cg would need that isn’t already part of the language.

Ditto for OpenGL2.0 - difference being gl2 already has these crucial features.
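
For comparison, here is a sketch of what those constructs look like in the proposed gl2 shading language. The syntax follows the 3Dlabs white papers (essentially what the Wildcat VP extensions expose today), and the varying/uniform names are made up for the example.

[code]
// GL2-style fragment shader sketch: loop over the active lights with a
// per-light conditional. Names are illustrative, not from the spec.
varying vec3 normal;          // interpolated eye-space normal
varying vec3 position;        // interpolated eye-space position

uniform int  lightCount;      // number of active lights, set by the application
uniform vec3 lightPos[4];     // eye-space light positions
uniform vec3 lightColor[4];

void main()
{
    vec3 n = normalize(normal);
    vec3 accum = vec3(0.0);

    for (int i = 0; i < lightCount; i++)       // looping
    {
        vec3 l = normalize(lightPos[i] - position);
        float d = dot(n, l);
        if (d > 0.0)                           // conditional branching
            accum += lightColor[i] * d;
    }

    gl_FragColor = vec4(accum, 1.0);
}
[/code]

The data-dependent loop and the per-fragment branch are exactly what current Cg fragment profiles cannot express, which is the gap being pointed out here.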

Is this a blatantly monopolistic move? Sure, since it is highly likely that none of their competitors will be writing a Cg version. At the same time, there are worse companies who could monopolize the graphics card market. That’s one of the reasons I don’t mind that Microsoft has its monopolies: as long as they keep producing products I like to use and that are productive, I will continue to use them. And as long as nVidia continues to produce products that are of a high quality, I am willing to overlook their blatant power-grab.

Then that is very sad. Use d3d then.

The only thing that will stop me moving my attention fully towards d3d is opengl 2.0.
I’ve just finished reading the specs (and I’ve read the Cg specs too), and it looks like a developer’s dream.

Summary of my opinion on Cg: it is nice, but very limited, and controlled by a single vendor.

Sorry for sounding terse with you - but I’m still outraged at nvidia basically plagiarising a lot of 3dLabs’ work. Cg is a cut-down version of the gl2 shader language.

[This message has been edited by knackered (edited 07-04-2002).]

Originally posted by Korval:
[b]First of all, the OpenGL vs D3D debate has no place in a discussion of Cg.

As far as Cg is concerned… in many ways it is the right way to go if nVidia pulls it off correctly. OpenGL 2.0 has one fatal flaw: it is utterly useless. Not only does it not exist yet, but when it does no hardware will support it. It will be 2 to 3 years before we see GL 2.0 in hardware.

I have a card in my computer that fully supports Cg at this very moment. When nVidia releases the GeForce 5, (once again, if nVidia does it right), all of my compiled Cg code will work just fine on it. I’ll download a new Cg compiler with new expanded functionality, but all the old Cg shaders will still compile. When the hardware is available for it, Cg will provide GL 2.0 capabilities.

As long as the Cg compiler is fully backwards compatible (and provides the ability to compile somewhat complex shaders for older hardware to the extent that it is possible), Cg is a better solution than GL 2.0. Ultimately, the problem with GL 2.0 is that, while it is a nice future, it is a useless present and will not be very usable for the near future. Cg is here and mildly useful now; it will be here when the GL 2.0 shader is around, and it will still be useful.

As to the argument that nVidia is using Cg as a power-grab to reclaim the market… of course they are. They are caught between 2 organizations beyond their control: Microsoft and the ARB.

They can’t control what GL 2.0 is simply because everyone on the ARB competes with them. They want to bring the market leader down, so they will do everything in their power to make GL 2.0 as difficult as possible for nVidia to use.

At the same time, Microsoft benefits by having multiple graphics card vendors in a good position, which is why the 1.4 Pixel Shader was, basically, written by ATi (to offset the fact that the 1.1 PS was written by nVidia). D3D 9’s shaders don’t provide any side in the market with an advantage.

Trapped between GL 2.0 being out of their control and D3D 9 not providing them the advantage D3D 8 did, they have one option: make their own language. In a way, Cg is a lot like D3D’s shaders, only with nVidia in charge of the language. Also, it gives us, as users of the language, growing room, which is not what OpenGL 2.0’s shaders are designed for.

Once you provide conditional branching and looping constructs at both the vertex and pixel levels (and they could use the exact same interface), there’s really nothing else Cg would need that isn’t already part of the language.

Is this a blatantly monopolistic move? Sure, since it is highly likely that none of their competitors will be writing a Cg version. At the same time, there are worse companies who could monopolize the graphics card market. That’s one of the reasons I don’t mind that Microsoft has its monopolies: as long as they keep producing products I like to use and that are productive, I will continue to use them. And as long as nVidia continues to produce products that are of a high quality, I am willing to overlook their blatant power-grab.

As long as Cg is backwards compatible, and the language itself doesn’t change too much (as I said, the only additions needed are looping and conditional branching syntax), it should be a better alternative than the vaporware that is GL 2.0.[/b]

You claim that OpenGL 2.0 “does not exist yet” and is “useless”. Well, our software already supports OpenGL 2.0 shaders on real, existing hardware (the P10).

OpenGL 2.0 aims to set a standard and a vision for the future instead of only reflecting existing hardware. Because of this, the OpenGL 2.0 functionality is a superset of all other existing shader languages. But of course, at any point in time, existing hardware can easily support just a part of OpenGL 2.0, so OpenGL 2.0 is useful both today and tomorrow.

OpenGL 2.0 also has the aim to be an open standard across all hardware vendors including NVidia, ATI, 3dlabs, Matrox, etc. etc. etc.

I think it is worth pushing these aims and so pushing OpenGL 2.0.

Korval,

  1. OpenGL2 is not vaporware. It has support from multiple hardware vendors, and tons of ISVs.

  2. A lot of the functionality written in the OpenGL2 white papers has been implemented for the Wildcat VP as extensions to OpenGL 1.3. Real ISVs are using that today, creating some amazing visual effects.

  3. The proposed direction for OpenGL2 encompasses much more than just a shading language. Again, see the white papers. Some other majorly important aspects are better memory management, more efficient data movement, and more control over synchronization for the application.

In contrast, Cg is only a shading language with less functionality than our GL2 extensions offer today (for example, a full fragment shader with looping and conditionals).

Barthold
3Dlabs

How many gamers have P10 based cards in their PCs?

Probably less than a handful.

How many have DX8 compatible cards? Quite a lot.

Which HLSL runs on DX8 compatible hardware now? Cg. Not GL2.

In the future GL2 will probably make more sense. But for some reason people have got it into their heads that just because NV want developers to use Cg now, it is going to destroy all the plans for GL2. What utter nonsense.

What about the fact that NV claim Cg is compatible with the DX9 shading language? IF this is true, then developers supporting GL and DX9 need only 1 set of shaders, not 2 as they would if they used the DX9 HLSL and GL2’s HLSL.

I simply don’t understand what the big deal is. GL2 is an API that happens to have a C based shading language, and for consumer class hardware it is nowhere near ready. Cg is for developers to use now, to make writing vertex/fragment programs easier. Yes, we’re all aware that fragment programs aren’t yet available for GL, but this hopefully will be remedied soon. IF other companies such as ATI, Matrox etc… don’t get involved, then they’re only going to hurt themselves in the long run. Unless they suddenly release GL2 or their own shading language which eclipses Cg.

IMHO.

Nutty

Some corrections of wrong statements:

Originally posted by Korval:
They can’t control what GL 2.0 is simply because everyone on the ARB competes with them. They want to bring the market leader down, so they will do everything in their power to make GL 2.0 as difficult as possible for nVidia to use.

Obviously nonsense. Why should it be difficult for NVidia to use OpenGL 2.0? (Do you believe in aliens preventing NVidia from using the OpenGL 2.0 shader language? :wink: )

On the contrary, NVidia has always said that they will support OpenGL 2.0 as soon as it is approved as a standard by the ARB.

Trapped between GL 2.0 being out of their control and D3D 9 not providing them the advantage D3D 8 did, they have one option: make their own language.

Those facts are simply wrong. The D3D9 shader language is the same as Cg, so NVidia didn’t make their own language, only their own implementation.

In fact, NVidia and Microsoft worked together to make Cg == DX9 SL. This is the opposite of being trapped.

First of all, I don’t think Cg is so bad. It is a great move to make writing shaders easier for developers; nvparse cubed, or something along those lines. However, the OpenGL 2 shading language is more forward looking and aims higher, which I think is a good thing. The API playing catch-up with the hardware is what’s been plaguing OpenGL for a few years now, and it should stop.

However, while the shading language is an important feature of OpenGL 2.0, there’s other stuff that I see no reason why vendors wouldn’t support. The superior memory/data management and better synchronisation are great, and really needed. The idea of a “pure” OpenGL with more functions moved to glu also appeals to me; I’ve never understood what stuff like edge flags and selection are doing in the API anyway.

I honestly don’t care what shading language I use as long as it is compatible across multiple vendors and I get to use the other good stuff from OpenGL 2.0. And please, no assembler-level interfaces to shading. I’ve seen this advocated by several people at nvidia, but I’ve yet to see any benefits listed other than “the API should expose lower level interfaces, Cg is the high level interface”.

Originally posted by Nutty:
[b]

And secondly, I’m not refusing to consider D3D. Again, Cg has benefits there: provided suitable fragment program profiles come out, you can use the exact same Cg shader for D3D code and OpenGL.
Nutty[/b]

This is a suitable time to comment on another Cg ‘unique selling point’ - you can run the same shader on OGL and DX.

How could anyone not buy into this story - paradise - write once run anywhere. Worked for Sun so why shouldn’t it work for nvidia with Cg?

How much effort goes into the shader part of a game/app and how much goes into the rest of the 3D part? Even if the shader part were portable between OGL and DX, the remainder of the 3D segment of the program is highly non-portable and conveniently overlooked.

Take Doom 3 as a topical case in point. Fantastic graphics, but all achieved with one 20 or so line fragment OGL2 shader. Needless to say, most of the effort in getting the superb lighting effects, etc. is in the 99% of the graphics code that is not in the fragment shader. How much porting effort to DX would have been saved by writing in Cg? 5 minutes, and that is being generous.

This is based on the current snapshot of Doom 3, and other games will no doubt use more shaders of higher complexity (I hope so, otherwise I might as well hang up my architect’s hat), but I still assert that shader portability between two otherwise very different APIs has very little real value or benefit. Great marketing story though.

Dave.
3Dlabs
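
To give a sense of scale for that “20 or so line” figure: a tangent-space bump plus diffuse plus specular fragment shader fits in roughly that budget in the gl2 language. The sketch below is purely illustrative; it is not the actual Doom 3 shader, and every name in it is made up.

[code]
// Illustrative only - NOT the Doom 3 shader. A ~20-line per-pixel lighting
// fragment shader (normal map + diffuse + specular) in GL2-style syntax.
varying vec3 lightVec;        // tangent-space light vector
varying vec3 halfVec;         // tangent-space half-angle vector
varying vec2 texCoord;

uniform sampler2D normalMap;
uniform sampler2D diffuseMap;
uniform sampler2D specularMap;
uniform vec3 lightColor;

void main()
{
    vec3 n = normalize(texture2D(normalMap, texCoord).xyz * 2.0 - 1.0);
    vec3 l = normalize(lightVec);
    vec3 h = normalize(halfVec);

    float diff = max(dot(n, l), 0.0);
    float spec = pow(max(dot(n, h), 0.0), 32.0);

    vec3 color = lightColor * (diff * texture2D(diffuseMap, texCoord).rgb +
                               spec * texture2D(specularMap, texCoord).rgb);
    gl_FragColor = vec4(color, 1.0);
}
[/code]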

Originally posted by folker:
[b]Some corrections of wrong statements:

Obviously wrong facts. The D3D9 shader language is the same as Cg. So NVidia didn’t make their own language. Only their own implementation.

In fact, NVidia and Microsoft worked together for Cg == DX9 SL. This is the opposite of being trapped.[/b]

In my book, Cg == DX9 means that a program written for Cg compiles unchanged under the DX9 HLL.
Comparing both specs, this is clearly not the case: Cg has types not present in the DX9 HLL, and the DX9 HLL has loops, to name just two blatant differences! Unfortunately the DX9 HLL spec is under NDA, so I cannot go into any more detail.

I asked Microsoft about this as well, and their comment was that nvidia are portraying more collaboration than there has been; they don’t consider the languages the same and have no intention of supporting nvidia in this endeavor.

Dave.
3Dlabs

I am sure the next gen NVIDIA will be fully DX9 compliant.

Currently, we have the following situation regarding Cg:

If you use DX8: For NVidia hardware, you can use Cg. To use features of other hardware (e.g. ATI) you must use assembly vertex/pixel programs (because Cg only supports features of NVidia cards).

If you use DX9: You can use the DX9 shader language for all hardware. But to use different features of different hardware, you have to write different shaders.

If you use OpenGL: For NVidia hardware, you can use Cg. To use features of different hardware (e.g. ATI) you must use ATI extensions etc. etc.

So all in all, Cg is indeed very useful: Cg is a powerful new “nvparse” for NVidia hardware, making it much easier in many situations to write shaders instead of using an ugly assembly language. No question. And since NVidia has big market power, it automatically becomes something of a standard (in the same way that many game companies support NV extensions, or that many game companies used Glide some years ago). This is all fine, and so Cg is a good thing that makes life easier, no question.

OpenGL 2.0 has different aims which are very much worth pushing: A future-oriented, open shader language standard for all hardware vendors.

It is very likely that basically all hardware vendors will use the OpenGL 2.0 shader language, because the current situation of having a jungle of proprietary OpenGL extensions has clearly reached its limits, and the demand for a standard shader language for OpenGL is growing every day. And OpenGL 2.0 shaders are the (only) open shader standard solving this problem.

And indeed, nearly all hardware vendors are strongly supporting OpenGL 2.0, and so are a lot of software vendors.

So we will see a lot of OpenGL 2.0 support from many hardware vendors in the near future. The Wildcat VP from 3dlabs is only the beginning…

Originally posted by Dave Baldwin:
[b]
This is a suitable time to comment on another Cg ‘unique selling point’ - you can run the same shader on OGL and DX.

How could anyone not buy into this story - paradise - write once run anywhere. Worked for Sun so why shouldn’t it work for nvidia with Cg?

How much effort goes into the shader part of a game/app and how much goes into the rest of the 3D part? Even if the shader part were portable between OGL and DX, the remainder of the 3D segment of the program is highly non-portable and conveniently overlooked.

Take Doom 3 as a topical case in point. Fantastic graphics, but all achieved with one 20 or so line fragment OGL2 shader. Needless to say, most of the effort in getting the superb lighting effects, etc. is in the 99% of the graphics code that is not in the fragment shader. How much porting effort to DX would have been saved by writing in Cg? 5 minutes, and that is being generous.

This is based on the current snapshot of Doom 3, and other games will no doubt use more shaders of higher complexity (I hope so, otherwise I might as well hang up my architect’s hat), but I still assert that shader portability between two otherwise very different APIs has very little real value or benefit. Great marketing story though.

Dave.
3Dlabs

[/b]

Doom 3 wouldn’t be possible with Cg, because (in contrast to nvparse) Cg does not expose the full power of the assembly and combiner shaders that Doom 3 requires.

Not to mention that you cannot use Cg to implement the ATI code path, for example.

Doom 3 is very easily possible with OpenGL 2.0 as Carmack demonstrated. One reason is that the P10 fragment shaders are much more flexible, and that OpenGL 2.0 easily exposes all the features required for Doom 3.

And as soon as another hardware vendor provides OpenGL 2.0 support (e.g. ATI?), Doom 3 will automatically run on this hardware using OpenGL 2.0 without having to write a new code path.

Originally posted by Nutty:
[b]How many gamers have P10 based cards in their PCs?

Probably less than a handful.

How many have DX8 compatible cards? Quite a lot.[/b]

Well then write shaders using nvparse/ATI fragment shaders/vertex programs for those cards - DX8 compatible cards aren’t capable of supporting complex shaders anyway, so writing asm shaders for them is easy…you of all people should know that, Nutty. Cg is not going to help you here…as I say, until the ARB release a general pixel shader extension which is compatible with DX8 cards.
For the gl2 compatible cards, use the gl2 path…and write wholly better looking shaders.

I’m not really worried about what gamers have - I write simulations, and I decide what hardware to use!

I don’t mind reading posts from 3DLabs commenting on Cg but then I’d also like to hear people from NVIDIA (and ATI/Matrox) giving their opinion…

Anyway, I cannot see what the big deal is. Nutty is completely right: GL2.0 is useless right now, at least for the kind of work he’s doing (Game Dev). No gamer has a P10 and I don’t think this is about to change.

Now, for research work, I’d love to get my hands on GL2.0-enabled hardware but I can tell you I WON’T use GL2.0 in my apps right now (and I am not in the gaming industry just in case you were wondering).

Cg can be useful right now for anyone who targets NV products in general. It is up to other vendors to provide a profile for their own card. As Nutty said, if they don’t then NVIDIA will lose some money (mind you they’re loaded…) but if they do then it might be good for us while patiently waiting for GL2.0-CONSUMER-cards.

I have nothing against 3DLabs at all, but the fact that they have a product that supports some features of GL2.0 is not enough for developers to use GL2.0! Typically, managers will look at the user base before deciding on that, and I am afraid the P10 is not exactly what I call “widely used” these days…

Anyway, I hope GL2.0 becomes a reality soon but meanwhile I’ll have a go at Cg and why not even try DX9…

Regards.

Eric

now:
use cg
tomorrow:
use gl2

why?
one is nvidia only
the other is arb-gl

nvidia is just trying to get its own hands into this.

Some of you have been espousing Cg as a ‘now’ thing, and GL2 (eric, at least) as ‘useless’. I’d have a different label for Cg: niche.

In order for Cg to be truly useful, multiple vendors would need to provide ‘profiles’ for their hardware. nVidia has tried to make this sound like the most reasonable thing in the world. What would the effect be on a competitor, though?

Say ‘SpiffyGFX’ is working on their next gen product. nVidia spews forth Cg as the savior of graphics programming. Now, at sgfx, to support this language, I need to pull developers and resources off my existing (mainstream) projects to write a profile for this thing. The net effect being a slow-down on my development cycle in order to support an unproven technology with uncertain impact and an unknown future. This /alone/ is enough to make me choose not to bother developing a Cg profile.

Now, say I have extra developers twiddling their thumbs and I put them on the project. What advantage do I gain? By endorsing a competitor’s development efforts, I not only lend credibility to that competitor, but I put myself in a catch-up position. As the sole proprietor of the language specification, nVidia is the only company with a native ability to fully support the ‘latest’ specification. Given the build-up prior to Cg’s release (none), one can assume future specification updates will be similarly launched out of the blue. Note that nVidia really has little option on this point - if they publish the specification early, their competitors will know what nVidia’s hardware plans are ahead of time.

So, we’ll go one step further, and say my hardware development company has signed all the necessary paperwork to get information about this thing in time to implement it before it hits market. We’ll even assume that nVidia plays nice and adheres to the advance specification even if their QA team turns up a hardware bug which makes it hard to support a new feature. Now I have a direct competitor specifying what I can support. If nVidia chooses to add some wacky event-driven scheme with callbacks and volume rendering, then I either get it in hardware or publish a card which gets reviews like “Supports CG, but the implementation sucks compared to the GeForceWallaWalla card.”

This works the other way, too. Say my new card design has holy-grail-quality support for higher-order surfaces, but nVidia’s doesn’t. Guess what feature will NOT be in the Cg language?

So, unless nVidia develops the profiles FOR them, there will be no profiles for non-nVidia cards. So I (back in software developer land here) can support Cg on nVidia cards, but have to write special support in “Other API” for other cards. Or I could choose to just write the whole thing in “Other API” and save development dollars while still producing a kick-ass title.

In regards to OpenGL2.0 as ‘useless’, I find it amazing to think how quickly people have forgotten the OpenGL1.x development path. Until 2000 (or so), you could NOT count on a consumer-level card implementing the full OpenGL path in hardware. It would have been a foolish assumption, and your game would have run like crap. Even today, you can’t write general case OpenGL and assume it will run well - there is a fast path for each card, and straying from it brings penalties.

OpenGL2.0 will be no different in this respect. Some things will be fast, some things will be slow. Some features will force a partial software fallback, others a total software fallback. Some features will already be in hardware, and others will be implemented as soon as IHVs see ISV demand for them. In short, your application optimization process will not change much, though the design process will. A well-designed application will also scale nicely across newer hardware while still performing well on the ‘old’ stuff we have available today.

Thanks for actually reading all that,
– Jeff Threadkiller

Originally posted by Eric:
I don’t mind reading posts from 3DLabs commenting on Cg but then I’d also like to hear people from NVIDIA (and ATI/Matrox) giving their opinion…

Both NVidia and 3dlabs are commenting heavily on both Cg and OpenGL 2.0 on this discussion board (see, for example, the Carmack thread). They post immediately if they think something is worth saying… :wink:

Anyway, I cannot see what the big deal is. Nutty is completely right: GL2.0 is useless right now, at least for the kind of work he’s doing (Game Dev). No gamer has a P10 and I don’t think this is about to change.

OpenGL 2.0 is not only for the P10 board.
OpenGL 2.0 is an open standard, and there will soon be consumer cards supporting OpenGL 2.0.

[b]
Now, for research work, I’d love to get my hands on GL2.0-enabled hardware but I can tell you I WON’T use GL2.0 in my apps right now (and I am not in the gaming industry just in case you were wondering).

[/b]

From the perspective of a software vendor (especially a middleware vendor), you have to support everything anyway: OpenGL 2.0 because it will be a standard, Cg because NVidia hardware is important, DX9 HLSL because it will be the standard for DX9, etc.

(Our philosophy is to support everything which is important. Both OpenGL 2.0 shaders and Cg definitely are important.)

[b]
Cg can be useful right now for anyone who targets NV products in general.

[/b]

Cg is not ready to be used in practice: there is no fragment shader support in OpenGL Cg, the Cg compiler exe has to be called at application runtime, and there are some other minor problems, etc. We have had some ugly experiences with the current Cg implementation in practice… But of course, NVidia is working on that, and this will also change in the future, no question.

[b]
It is up to other vendors to provide a profile for their own card.

[/b]

To be realistic, no competing hardware vendor (e.g. ATI) will write Cg profiles, no question. They will use OpenGL 2.0 shaders.

[b]
As Nutty said, if they don’t then NVIDIA will lose some money (mind you they’re loaded…) but if they do then it might be good for us while patiently waiting for GL2.0-CONSUMER-cards.

[/b]

NVidia won’t lose money. Cg is very useful even if it is only supported by NVidia and only supports NVidia hardware, in the same way as nvparse.

[b]
I have nothing against 3DLabs at all, but the fact that they have a product that supports some features of GL2.0 is not enough for developers to use GL2.0! Typically, managers will look at the user base before deciding on that, and I am afraid the P10 is not exactly what I call “widely used” these days…

[/b]

The strong argument for OpenGL 2.0 is that it will be a standard across hardware vendors.

And for software developers it is important to look to the future. If you don’t start developing for the future early enough, you soon risk falling behind.

Thaellin, folker,

I thought I made myself clear when saying that I was waiting for GL2.0 and that in my opinion Cg would probably only be used during a transition period.

Now, you both seem to think that GL2.0 is there already.

Can you then answer this simple question: how do I start using GL2.0 on my GF4? OK, bad question, this is an NVIDIA card!!! So, apart from the Wildcat VP, which graphics card can I buy this afternoon which will provide me with GL2.0 drivers?



That’s what I thought…

Now, if as a developer I cannot get my hands on such a card at the local PC World (kidding), how the hell will my clients manage???

I still believe that it is too soon for GL2.0 to be used in real apps, that’s all. And because of that, I think Cg is an acceptable alternative for (NV) cards when you are planning to create shaders (let’s not forget that it is actually stupid to compare GL2.0 to Cg, like it is to compare DirectX to GL!)…

One thing I forgot to mention is that what I am really waiting for at the moment is GL1.4. Of course GL2.0 sounds nice and cool but I honestly think it is still too far away.

About the “OpenGL 2.0 will be a standard…”, I agree. But then again, can you name a lot of people that would decide today to go for GL2.0 only?

Anyway, perhaps I’ll be proven totally wrong (I don’t mind) and we’ll see GL2.0-enabled cards & software by XMas… but I doubt it !

Regards.

Eric

[This message has been edited by Eric (edited 07-05-2002).]