Nvidia Cg toolkit

so, now i'm allowed to make a few statements.
i've read through the 150-page spec by now.
never thought i could get through 150 pages that fast…
have to admit, i like the design. it's quite okay. i would prefer if they supported the gl2.0 language directly and wrote a compiler for vp instead of inventing their own language, but well…

another thing i would love:
programmable pixel shaders in gl on nv10 hardware… the register combiners could be used for that anyway, and it would be nice if they adopted it. oh, and the nv10 vp could be much better than the nv20 vp: since it runs in software it could support branching as well, and huge arrays for doing texture lookups, and, and, and… oh no, that's matrox, isn't it?

well then…
will there be a fragment-to-pixel program as well? a programmable blending unit would be sweet, including stenciling, depth testing etc… i know there is not much programmability there, but there's enough to be worth exposing. (it's about the same amount of "programmability" as the texture_shader extension, so why not?)

anyone have a cheap gf3 lying around?

if you compile at runtime, everyone can read the whole gpu part of your engine.

There's nothing stopping you from encrypting your shader files and decrypting them on load.
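
A minimal sketch of what I mean, assuming you xor'ed the files when you shipped them (the key, file handling and the trivial xor "cipher" are all made up for illustration; drop in whatever real scheme you like):

// minimal sketch: undo a trivial xor "encryption" at load time.
// key and scheme are placeholders; a real cipher slots in the same way.
#include <fstream>
#include <iterator>
#include <string>

std::string loadShader(const char* path)
{
    std::ifstream file(path, std::ios::binary);
    std::string text((std::istreambuf_iterator<char>(file)),
                     std::istreambuf_iterator<char>());
    const char key = 0x5A;                          // whatever you shipped it xor'ed with
    for (std::string::size_type i = 0; i < text.size(); ++i)
        text[i] ^= key;                             // decrypt in place
    return text;                                    // plain shader source, ready to compile
}

The source only ever exists decrypted in memory, for the moment you hand it to the compiler.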

anyone have a cheap gf3 lying around?

I'll sell you my GF4 when the NV30 comes out

Nutty

yeah, encrypting and decrypting… it would be sweet to compile down to an intermediate language, the way java, .net and gl2.0 do, which could then be shipped… so i'd only have to store that binary intermediate representation…
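
something in this direction is what i mean (just a sketch: it assumes you precompiled the Cg offline to a !!VP1.0 vertex program string and only ship that file; gl headers and the extension entry point are assumed to be set up already):

// sketch: load a precompiled vertex program string from disk and hand it
// straight to the driver; the Cg source never ships at all.
// assumes <GL/gl.h> + <GL/glext.h> are included and the glLoadProgramNV
// entry point has been resolved (wglGetProcAddress / glXGetProcAddress).
#include <fstream>
#include <iterator>
#include <string>

void loadPrecompiledVP(GLuint id, const char* path)
{
    std::ifstream file(path, std::ios::binary);
    std::string vp((std::istreambuf_iterator<char>(file)),
                   std::istreambuf_iterator<char>());
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id,
                    (GLsizei)vp.size(), (const GLubyte*)vp.c_str());
}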

anyways, it's not that important for me, since i'm not a big paid game company… if someone steals my stuff that could only be a good thing (say, if carmack does )

by the time you get your nv30 i won't need a gf4 anymore, because then i'll have a) an r300 or b) an nv30 or c) something else myself…

i just need something to bridge the gap…
nvidia, if i promise i'll buy an nv30, would you give me a gf3 till then?

>>nvidia should learn how to create small stuff… 80mb this time… how fun with 56k…<<

i should email this to nvidia, but it tends to fall on deaf ears there.

but anyways i agree with dave about the 80mb (what's the date today, june 14th? hopefully i will have it downloaded by the end of the month).
2 suggestions:

A/ break the file up into smaller pieces, 20mb each is ok (getting towards the limit though)
B/ i assume it uses zip; use some better compression method, e.g. bz2, and i would only have to download 60-65mb. when is zip gonna die?

what's the use of zipping fat stuff? it's still fat, even if it's a bit less fat afterwards.
better they should try not to bloat their stuff in the first place… all those sdk's and such are just plain stupid imho. they could step back and code plain demos, as they did if you look at the stuff from 2000 or so. that was fine. one demo at 100k, one at 5mb (because of the models and textures), etc…

what we need in the cg toolkit is a .lib, a .h and the installer. that should fit in a meg or two. we need the documentation, which is 1mb, and the help files, which should not take more than one or two megs and could even be online html. the demos should not use fat textures and only a few models, so the data package is maybe another 5mb, and the demos themselves can be small exes with small sources.

the whole thing should fit in 20mb. at most.

i dunno, as i haven't downloaded it yet. we'll see what's all in there…

EDIT: actually it takes quite long to download even here at the company, where we have a pretty fast connection… longer than downloading the old demos over a modem… that's the part of evolution i hate… when connections were slow, everyone tried to make small stuff… connections get faster => stuff gets fatter. in the end, no difference…
stop that please!
i currently have ice age in 200mb, a whole 80-minute movie. i don't want a simple compiler to be the same size… (and yes, it looks quite okay, VHS quality, and yes i know it's illegal, but a) i already watched it in the cinema and b) i'll buy the dvd when it's out, so what?)

[This message has been edited by davepermen (edited 06-14-2002).]

Originally posted by davepermen:
do you really want, as a game developer, all your sources to be open source? if you compile at runtime, everyone can read the whole gpu part of your engine.

Ever heard of GLTrace? With a little bit of extension, it would be completely trivial to upgrade GLTrace to intercept the commands that feed the shaders into the driver. That's all you need to find out what's going in. Sure, they may only get the low-level version of the code instead of the Cg version, but it's not like that has ever stopped people/companies from reverse engineering before. And as someone else said, you can encrypt the code too. If someone is clever enough to decrypt your source, they are probably clever enough to reverse engineer the compiled shader anyway.
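
Just to show how little work that is, here is a sketch of such an interposer (this is my own toy code, not GLTrace's; the GL typedefs are stand-ins so it is self-contained, and the actual hooking/loading part is left out):

// hypothetical interposer: dump every vertex program the app hands to the
// driver, then forward the call to the real entry point untouched.
#include <cstdio>

typedef unsigned int  GLenum;     // stand-in GL typedefs, just for the sketch
typedef unsigned int  GLuint;
typedef int           GLsizei;
typedef unsigned char GLubyte;

typedef void (*LoadProgramNVFunc)(GLenum, GLuint, GLsizei, const GLubyte*);
static LoadProgramNVFunc real_glLoadProgramNV = 0;    // filled in by the hook loader

void hooked_glLoadProgramNV(GLenum target, GLuint id,
                            GLsizei len, const GLubyte* program)
{
    std::FILE* f = std::fopen("dumped_programs.txt", "a");
    if (f) { std::fwrite(program, 1, len, f); std::fputc('\n', f); std::fclose(f); }
    if (real_glLoadProgramNV)
        real_glLoadProgramNV(target, id, len, program);   // pass it through unchanged
}

Every program string the engine submits ends up in a text file, no matter how it was stored on disk.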

how about another compilation target?

currently there are
glvertexprogram
dxvertexshader
dxpixelshader

soon there will be
glpixelprogram or so

then there will be gl1.4 vertex and pixel shaders

then dx9 vs and ps.

then gl2.0

well… while we're at it, i would like to have one more target:

cpu

you define an in-buffer and an out-buffer, and with this language you can process vertex data. that way cg would turn out to be a vertex-c language for stream data processing in general.
it would be too cool…
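
something in this direction, purely hypothetical (none of these names exist anywhere, it just shows the in-buffer/out-buffer idea):

// hypothetical "cpu profile": run one kernel over an in-buffer into an out-buffer.
// nothing like this exists in Cg; it only illustrates stream processing on the cpu.
#include <cstddef>

struct Vertex { float x, y, z; };

typedef Vertex (*VertexKernel)(const Vertex&);

void runStream(const Vertex* in, Vertex* out, std::size_t count, VertexKernel kernel)
{
    for (std::size_t i = 0; i < count; ++i)
        out[i] = kernel(in[i]);      // every element independent -> pure streaming
}

Vertex scaleByTwo(const Vertex& v)
{
    Vertex r; r.x = 2*v.x; r.y = 2*v.y; r.z = 2*v.z; return r;
}

the point being: the same kernel source could go to the gpu, or to a plain loop like this one.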

but i think that's too much for nvidia, no? first get it working on other gpus…

Originally posted by LordKronos:
Ever heard of GLTrace? With a little bit of extension, it would be completely trivial to upgrade GLTrace to intercept the commands that feed the shaders into the driver. That's all you need to find out what's going in. Sure, they may only get the low-level version of the code instead of the Cg version, but it's not like that has ever stopped people/companies from reverse engineering before. And as someone else said, you can encrypt the code too. If someone is clever enough to decrypt your source, they are probably clever enough to reverse engineer the compiled shader anyway.

yeah sure, but i don't see programs getting reverse engineered THAT often (and you can do that here as well; if there are .dll's it's even quite simple to figure out the general structure thanks to the named functions). it would at least protect against the more newbie-ish ones who just want to rip stuff to look cool (newbies always want code from us; if they could get it for free they would take it and not look at any license…)

http://developer.nvidia.com/dev_content/cg/cg_examples/pages/soft_stencil_shadows.htm

VERY sweet. but not in the 80mb file.

An API-independent shader language is cool, but it would be nice if the shader language were integrated with the API.
I'd hate to see only a handful of people using OpenGL 2.0's shading language.

Can someone please tell me if I can run an opengl vertex program or Cg on a TNT? (yeah, there still are people using a TNT… in software mode.)

if NV_vertex_program is in your extension string, yes you can (easy to check, no?)
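
for completeness, the check looks like this (needs a current gl context; on windows include the platform header before gl.h; the plain substring search is good enough since every variant of the extension name contains it):

// check the extension string for NV_vertex_program.
#include <cstring>
#include <GL/gl.h>

bool hasNVVertexProgram()
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext != 0 && std::strstr(ext, "GL_NV_vertex_program") != 0;
}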

Originally posted by davepermen:
[b]how about another compilation target?

currently there are
glvertexprogram
dxvertexshader
dxpixelshader

soon there will be
glpixelprogram or so
[/b]

Talking about Cg 2.0 (this was asked before in this thread)…

Vertex and pixel shaders (programs) are not the end of the road yet. The next big thing will probably be primitive programs, i.e., things like freely programmable NURBS or subdivision tessellators.

If anyone is wondering how they whipped all of this stuff up out of the blue, take a look at http://graphics.stanford.edu/projects/shading/. According to Kurt Akeley, Bill Mark is one of the developers of Cg. Guess who's one of the chief developers of the Stanford shaders…

[b]

cpu

you define an in-buffer and an out-buffer, and with this language you can process vertex data. that way cg would turn out to be a vertex-c language for stream data processing in general.
it would be too cool…

but i think that's too much for nvidia, no? first get it working on other gpus…
[/b]

In a way, that's what's going to happen. In his invited talk at GI2002, David Kirk said the future lies in "stream processors". That's a little different from current CPUs, which waste a lot of die space (>70%?) on cache memory. With GPUs, you can use all of the silicon for computation if you adhere to the concept of freely programmable "stream processors" with a certain input and output bandwidth, plus load balancing to keep all the processors busy…

Michael

http://developer.nvidia.com/dev_content/cg/cg_examples/pages/soft_stencil_shadows.htm
VERY sweet. but not in the 80mb file.

Dunno what 80 meg file you're looking at, but the 80 meg toolkit I downloaded did have that demo in it. It's in the Cg browser.

Considering most of this stuff is targeted at game developers, they're probably not bothered about people whinging that it takes a long time to download on a 56k modem. I don't know of any games company that doesn't have a fat pipe. Buy broadband, dave! In England you can get broadband for cheaper than flat-rate dial-up!!

Nutty

stop bitching at me personally, rich, okay? not in the forums. come online instead.

i will have broadband soon.
anyways, it's stupid to download files like this when, with broadband, you could just run the setup straight off the network instead. even on broadband it takes time to download that. that's not why i want broadband. not just to get files 10 times bigger than before broadband was standard. really not.

EDIT: thanks anyways, i've found it now

EDIT 2: and now what? where is the SOURCE for the whole thing? i don't need the shadow-volume extrusion, i want to see how they do the soft-shadow part… THAT is not in there, or is it, richy?

[This message has been edited by davepermen (edited 06-14-2002).]


CgToolkit\Direct3D\DX8\src\demos_CG\SoftShadows1\

BTW: did you notice these?
CgToolkit\OpenGL\lib\Debug\RegComParser.lib
CgToolkit\OpenGL\lib\Debug\TextureShaderParser.lib

Originally posted by Carmacksutra:

CgToolkit\Direct3D\DX8\src\demos_CG\SoftShadows1

thanks


BTW: did you notice these?
CgToolkit\OpenGL\lib\Debug\RegComParser.lib
CgToolkit\OpenGL\lib\Debug\TextureShaderParser.lib

i think those are the nvparse things, no?

I was reading the page where they list all the companies that support Cg, and I was surprised not to find Epic or Id Software.

Interesting.

http://www.theregister.co.uk/content/54/25732.html
commentary taken from 'the register',
written by someone who works on competing technology, so it should be taken with a grain of salt

>>No break, continue, goto, switch, case, default<<

huh! what a failure.
cg hasn't even planned for next year's (or even this year's) hardware
i have a feeling cg will be updated every 6 months

to use it you will be forced to write

if ( cg_version == 2 )
    // ... version 2 path ...
else if ( cg_version == 3 )
    // ... version 3 path ...
else // version 1
    // ... fallback path ...

an article titled: "Why Nvidia's Cg won't work"
for me, cg's main problem is ogl2.0, since both will fight over the same ogl ground, but andrew richards has other serious arguments to throw into the balance…

[This message has been edited by haust (edited 06-14-2002).]

Originally posted by zed:
[b]cg hasn't even planned for next year's (or even this year's) hardware
i have a feeling cg will be updated every 6 months

to use it you will be forced to write

if ( cg_version == 2 )
    // ... version 2 path ...
else if ( cg_version == 3 )
    // ... version 3 path ...
else // version 1
    // ... fallback path ...
…[/b]

what do you expect? as long as the hardware vendors don't agree on a general solution for how to implement shaders in HARDWARE, there will be no solution in SOFTWARE that covers it. there has to be a final "x86 spec" for gpus; before that, there will be no real shader language. that's why i prefer gl2: it defines WHAT WE NEED, full stop. it doesn't matter what is out now and what isn't. they SET a standard which isn't here yet, but now everyone works towards that standard.

cg, on the other hand, wants to build a standard around currently existing hardware, which is a fine thing. but it's not at all a holy grail, else no one would care about gl anymore and we would all stick with dx. because dx in fact does the same: every version they set the standards and everyone tries to support them. the result is caps and versions for each gpu. the fixed-function pipeline, by contrast, is standardised very well. that's why it works the same everywhere (sure, there ARE extensions, but not really that many)

i just want the same for shaders.
we'll see what the future brings. at least we'll soon have a general vertex shader in gl. took the arb quite a while

oh, and i don't like the idea of using shaders/scripts/strings to set up the gpu at all. why? because in the end i plan to use the gpu as a general streaming processor for my own stuff… for that i want some asm- or function-level interface. it's just much handier than doing these runtime compilations. a high-level language around it? no problem. but i want a VERY BASIC base interface, meaning functions to set it up (sort of what ati did). the base has to stay low level. that's my view.

and if you have a function-based setup, you can use generics/metaprogramming to build a nice interface DIRECTLY in c++, in the form of

VSvertex vpos = vsGetInput(GL_VERTEX_ARRAY);
VSvertex nrml = vsGetInput(GL_NORMAL_ARRAY);
VSmatrix to_screen = vsGetInput(GL_MVP_EXT);
VSvertex opos = vsGenerate(GL_VERTEX_VARIABLE);
opos = to_screen * vpos;

in such a way…

now THAT is a high-level interface. and the ordinary c++ compiler boils it down to the simple function calls…
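
a tiny sketch of what i mean (every name here is hypothetical, nothing is a real driver interface; it only shows how operator overloading turns the line above into plain function calls):

// hypothetical: a value type whose operator* just emits one low-level call.
#include <cstdio>

static int next_reg = 0;
int vsAllocTemp() { return next_reg++; }               // pretend register allocator

void vsEmitMul(int dst, int a, int b)                  // pretend low-level setup call
{
    std::printf("MUL r%d, r%d, r%d\n", dst, a, b);
}

struct VSvalue
{
    int reg;
    explicit VSvalue(int r) : reg(r) {}
};

VSvalue operator*(const VSvalue& a, const VSvalue& b)
{
    VSvalue result(vsAllocTemp());
    vsEmitMul(result.reg, a.reg, b.reg);               // "to_screen * vpos" becomes exactly this call
    return result;
}

so an expression like opos = to_screen * vpos goes through operator* and ends up as one vsEmitMul call. no string parsing, no runtime compiler.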

am i only dreaming or is this burning an eternal flame? close your eyes… give me your hand darling do you see my heart bleeding, do you understand? do you feel the same

hm… that just went through my head. i really need to go to bed

bye