Nvidia Cg toolkit

After reading through some more stuff, I think my second post hit it on the head. The main advantage is the high-level aspect of things, as opposed to assembly. Looking through some of the testimonials from developers who have been working with Cg, half of them go something like this:
“Right now we have 1 or 2 people that can read and write shaders, but with Cg everyone will be able to work with it”

Of course, that's a bit optimistic of a statement. There is still the high bar of having to understand a lot of the math that goes on. Then again, if one developer writes some Cg subroutines for transforming into tangent space and so on, then all the other developers need to know is that they need to transform into tangent space. They won't need to know how to do it, and they won't need to worry about taking the assembly instructions someone else provided, mixing those in with their own instructions, worrying about temp register collisions, etc.
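To illustrate what I mean (just a sketch with made-up names, nothing from the actual toolkit), a shared helper like this could sit in a common Cg file and everyone else simply calls it:

    // Hypothetical shared subroutine: rotate a vector into tangent space
    // given the per-vertex basis. Callers only need to know "call this
    // before doing per-pixel lighting", not how it works inside.
    float3 ToTangentSpace(float3 v, float3 tangent, float3 binormal, float3 normal)
    {
        // One dot product per tangent-space axis; the compiler allocates
        // whatever temporary registers this needs.
        return float3(dot(v, tangent), dot(v, binormal), dot(v, normal));
    }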

Now another thing: this is Cg 1.0, and I read that they will be shipping Cg 2.0 with the launch of the NV-30. Why is this? If they are just upgrading it to add an NV-30 profile, that's well and fine. But if they will be adding new features to the language to support the NV-30, then that's entirely wrong, and it's not likely other IHVs will adopt Cg because it would favor nvidia too much. Basically, the only way I see it succeeding as an “industry standard” is if it is fully featured enough in version 1 to support several years' worth of graphics cards.

That said, even if it doesn't become an industry standard, at least it will ease development for the nvidia side of things, having one language for OpenGL 1.4/2.0 and DirectX 8/9.

One thing I noticed though… I read the specification and it mentions the “fp20 [profile] for compiling fragment programs to NV2X’s OpenGL API”. Then the spec goes on to give detailed descriptions of DirectX 8 vertex shaders, DirectX 8 pixel shaders, and OpenGL vertex programs. No detailed description for the fragment programs. What happened?

Originally posted by LordKronos:
In thinking more about it, I am actually starting to realize that perhaps the largest benefit of Cg is that you can write shaders in a high-level language. I know that's pretty obvious, and it's something that nvidia is pointing out, but I guess it didn't sink in for me because I personally am pretty comfortable with things like assembly language. Writing shaders in assembly or using register combiners is not a hold-up for me. However, for a lot of programmers, it is. I always thought register combiners were perfectly fine, but in speaking to some nvidia guys a few years ago, they said their biggest complaint was that a lot of developers were having trouble learning or getting comfortable with combiners. The same thing happens in assembly: a lot of programmers don't get the concept of a limited number of registers. They have X registers but need 5X temporary variables, and sharing the registers among their “variables” just doesn't click with them. Being able to program in a high-level language will let a lot more people do it.
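To make that concrete, a rough sketch (illustrative names, not anything from the toolkit): in Cg you declare as many named temporaries as the code needs and leave the register allocation to the compiler:

    // Hypothetical lighting helper: several named intermediates, no manual
    // register juggling. The compiler decides which temp registers hold
    // H, diffuse and specular, and reuses them once they are dead.
    float3 ComputeLighting(float3 N, float3 L, float3 V,
                           float3 lightColor, float shininess)
    {
        float3 H        = normalize(L + V);             // half-angle vector
        float  diffuse  = max(dot(N, L), 0.0);
        float  specular = pow(max(dot(N, H), 0.0), shininess);
        return lightColor * (diffuse + specular);
    }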

i thought the main benefit with a high level language is ‘speed benefits’, eg with cpus it's interesting to note that a lot of the assembly ‘tricks’ from a couple of years ago actually run slower than the C version written then + compiled today (ie the asm is set in stone, the higher level language aint)
multiply that by about a factor of 5 (about the rate graphics hardware seems to be advancing compared to cpu’s)

btw ive said this at least 10x in the last couple of years: dont bother learning pixelshaders cause the syntax will soon be superseded, well heres further proof.

Wait a minute. I don't want that to happen… it makes my skills LESS valuable

im right behind u ehor

edit- i agree with davepermen (+ others) this being a ‘standard’ is a bad idea

BIG QUESTION how much input from sources outside of nvidia went into the design of this?

also something noones mentioned: i dont think ms will be too happy about this wrt d3d, i know relations between ms + nvidia aint been going too good for a while now (perhaps because of the xbox failure?)

to lighten the topic i just made up a joke

Q whats the difference between the xbox + the dodo?
A the dodo managed to hold out for a few years

ok ok feel free to shoot me (offer only open for 24 hours)

[This message has been edited by zed (edited 06-13-2002).]

BIG QUESTION how much input from sources outside of nvidia went into the design of this?

also something noones mentioned: i dont think ms will be too happy about this wrt d3d, i know relations between ms + nvidia aint been going too good for a while now

AFAIK, it was a joint development between Nvidia and M$.

edit- i agree with davepermen (+ others) this being a ‘standard’ is a bad idea

Why? Standards are good. Ppl seem to imply that NV is trying to steal ppl away from GL2’s shading system. It isn’t. Cg allows you to create a single “shader” that compiles down to GL2, GL1.4, DX8, DX9, and across multiple architectures, simply by changing the profile at compile time. The original source shader stays the same. Not all hardware is going to be able to support GL2 completely. Cg still allows you to use a single shading language for current hardware, which will be mainstream for quite a while.
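For what it's worth, that's all the profile switch changes; a trivial vertex shader like the sketch below (names are just illustrative) compiles to a DX8 vertex shader or to an NV2X OpenGL vertex program without touching the source:

    // One Cg source; the target profile (DX8 vertex shader, NV2X OpenGL
    // vertex program, whatever ships next) is chosen at compile time.
    struct VertOut {
        float4 position : POSITION;
        float4 color    : COLOR;
    };

    VertOut main(float4 position : POSITION,
                 float4 color    : COLOR,
                 uniform float4x4 modelViewProj)
    {
        VertOut OUT;
        OUT.position = mul(modelViewProj, position);  // object space -> clip space
        OUT.color    = color;                         // pass the vertex color through
        return OUT;
    }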

Nutty

Hi

I think I will play with it; perhaps I will use it for my game, for which I'm currently writing the base. It would be great if ATI wrote a profile ASAP so I could raise the minimum graphics card to a GF3/Radeon 8500 without writing multiple code paths.

As mentioned by others before, I'm missing a profile for register combiners/texture shaders (I could write a pixel shader and pass it to nvparse, but that is not the point of the exercise). Perhaps they should make the profile system open source so others [not only IHVs] can write a backend for their own purposes.
Perhaps it is a little bit silly, but what about writing a fragment backend for ARB_texture_env_combine so you could write a shader that makes TNT2/Rage users happy?

Bye
ScottManDeath

This does mean that you have to compile your shader for each target graphics card, not to mention for each platform (Win32, Linux, Mac, etc). You'd also have to compile for nVidia, ATI, Matrox, 3Dlabs and any other new vendors on the horizon, for each card model that you intend to support.

OpenGL 2.0 sounds the way to go in the future.

There are other high level shading languages out there. Take your pick.

No, you compile your shader at run-time, not development time.

You can compile at development time if you wish, but you then have to compile for all the various hardware like you said.

Run-time compilation is suggested as the preferred method, so you don't have this problem. Also, when new hardware ships, your shader will automatically compile for the new hardware, instead of being hard-coded to use older features.

Nutty

I've been playing with this new Cg stuff for a while now and all I have to say is GOOD JOB NVIDIA!! I think it's great. I mean, writing the vertex programs in assembly and stuff wasn't too bad; I did like it a lot. But heck, now with Cg, I like writing shaders even more. The only thing that interested me in OpenGL 2.0 was how shaders were written (in a C-like language), but now with this, I couldn't care less about OpenGL 2.0. Cg will work from a GF3 on up RIGHT NOW, and hopefully on other cards (like ATI and stuff), which is what I think they were also trying to get at with this. It looks like some here are not as excited about it as I am, but I guess you can't please everyone. I am actually surprised some have said negative things about it; this is a pretty darn powerful thing here. Once the next-gen cards come out that have a more powerful programmable GPU, this Cg language will be even more awesome.

-SirKnight

Nice to see someone who thinks it’s good!

I really hope ATI, Matrox and 3DLabs get their profiles built for this sharpish. No more writing vast amounts of GPU assembler for every target card out there.

All we need now is a Cg debugger.

Nutty

So the conclusion is, they upgraded nvparse and turned it into big news. Exciting, isn’t it …

I suspect at least some ‘inspiration’ for that has been gleaned from the GL2 shader compiler sources.

[flame bait]
They’d better fix their ‘fragment shaders’. If they had an interface that could come even close to ATi’s, there wouldn’t be any need for this anyway.
[/flame bait]

OMG, I just found out about this on another site…and just yesterday I was talking about C style shader programming and BAM! They release this! I’M VERY VERY EXCITED!!!

If any of you are interested, I have put up a resource section at Spider3D just for this sort of thing: http://www.spider3d.com/html/resources__.html

I can’t wait to try out this new vertex program generator LOL

Well, i like it, but the shader language that will come with ogl2.0 is also meant to be able to compile to dx… (they announced a ‘competition’ about that on the suggestions forum a while ago)

it would have been better to use that syntax to the max instead of a new one…

All we need now is a Cg debugger.

YES! That is exactly what we need now. I have always wanted a debugger for vertex_programs and the register_combiners but now with Cg, I think a debugger is more likely.

So the conclusion is, they upgraded nvparse and turned it into big news. Exciting, isn’t it …

Um…no. Cg is much more than an “upgraded nvparse.”

-SirKnight

Hmmm… I'm not sure if this would be possible, but it would be pretty cool if we could somehow use the VC++ debugger with Cg. But then again, I don't know if that's possible, or even if it were, how it would work out.

-SirKnight

That would be VERY cool SirKnight!

I thought the nvbrowser (or whatever it's called) had debugging capabilities.

So how useful do you think Cg will be anyway? Does this mean you can write something in a couple of lines instead of 50?

I will have to download and try this one out for sure. Looks very attractive.

Also, I think the idea is pretty obvious: being API-independent, GPU-independent, and free of extensions. gl2 is supposed to solve the latter two, but being API-free is kind of new.

V-man

Where's Ati with their plugin? Or are Radeon 8500s an extreme rarity around here?

Originally posted by JD:
Where's Ati with their plugin? Or are Radeon 8500s an extreme rarity around here?

Present!

LOL

Originally posted by GeLeTo:
What bothers me is this quote posted on the cgshaders forum:
“Nvidia agrees strongly with advancing OpenGL, but thinks the Cg approach is better. We need to advance the existing OpenGL, not create a radical new OpenGL.”

Here’s a link to where this quote probably came from:
http://www.extremetech.com/article/0,3396,apn=8&s=1017&a=28051&app=6&ap=7,00.asp

NVIDIA don't like OpenGL 2.0? I don't like the way Kurt talks about OpenGL 1.4 and OpenGL 1.5 as if they were official releases. Am I wrong in thinking that they are just NVIDIA releases, or is the ARB really moving to progress the 1.x line? IMHO, effort should be targeted towards OpenGL 2.0 rather than 1.4, 1.5, 1.6, etc. I really liked the way OpenGL 2.0 was progressing and believed that was the future. Of course, the interview may not express official NVIDIA views, but I'm guessing it probably does.

Don’t get me wrong, I really like NVIDIA products and especially developer support and I will certainly play around with Cg. However, after reading all of the (numerous) articles and discussion online, I have a feeling of unease. I think I would have preferred Cg was never released - I was happy to wait for OpenGL 2.0.

Anyway, the future’s looking interesting!

Originally posted by SirKnight:
YES! That is exactly what we need now. I have always wanted a debugger for vertex_programs and the register_combiners but now with Cg, I think a debugger is more likely.

nvparse debugged vertex programs and register combiners…

Um…no. Cg is much more than an “upgraded nvparse.”

why? it's a high level runtime shader compiler for currently nvidia-only hardware, soon for all gl1.4 hw, and afterwards for the rest.
but nvparse did the same, just less “complete”.
(that's not meant to put it down, nvparse is great )

to all, just one thing about the runtime shader compiler.
do you really want, as a game developer, all your shader sources to effectively be open source? if you compile at runtime, everyone can read the whole gpu part of your engine. while it would be great to see the source of a doom3, for example, i dunno how you would like your shaders getting copied around everywhere… you create a famous effect, and before you can say “piepiep” everyone uses it with no effort of their own.
i'm not against opensource, but well… it's just some idea…

another thing.
nvidia should learn how to create small stuff… 80mb this time… how fun with 56k…