GeForce/GTS-Specific Programming??

I’ve always wondered about this. NVIDIA has the GeForce card, and now they have the GTS.

These cards are supposed to do some kind of transform and lighting acceleration.

How do we make use of these features? Is it done through OpenGL? Or do we need to get some kind of developer API from NVIDIA?

Thanks for any info.

Look at this place: NVIDIA’s Developer Relations Site.
You’ll find a lot of docs, demos, etc.
Check all the presentations and whitepapers - they’re very interesting and much easier to understand (compared with the short, arid descriptions of the same OpenGL extensions).

And for some examples check this Topic: Register Combiners + bump mapping

Ah, thanks Serge. I found the site last night. That’s definitely a start.

Thanks again.

As long as you have a proper driver, all you have to do is use OpenGL’s native functions, and everything that can be accelerated will be done in hardware. That is, as long as you don’t do your own lighting/transformation.

No need for extra APIs.

Aren’t there some OpenGL extensions that you need to use? I hear that a developer has to “support” the new features like the T&L on the GeForce, so they must have to do something extra? It can’t all be automatic?

The way I think of it, NVIDIA has added some specialized hardware acceleration, and in order to use it you have to use some kind of extension to OpenGL, which they probably provide. Maybe I’m wrong - I hope I am, since that would make it easier to code.

As Bob said, you don’t need to do anything special to get transform and lighting accelerated with a graphics card that supports it, like the GeForce.

You certainly do not need to use any extensions…

What might be confusing is the hype around a “T&L game” - it sometimes sounds like they have “enabled” it! But what they have really done is take advantage of the T&L feature by using more complex geometry… more polygons in the level…


Ahh… Yeah, I got the impression that you had to enable the T&L computation for it to work. Well, that’s actually even cooler, then.

Thanks to all. I welcome any further info, but I think my question has been answered.

Following On, on the subject…

I’ve been trying to compile a small demo program using the multi-texturing units on a Geforce 2, but each time I get a compilation error ‘unresolved external _glMultiTexCoord2f()’, could someone give me some guidelines as to what I’m doing wrong…


Originally posted by srhadden1:
I got the impression that you had to enable the T&L computation for it to work.

You were probably reading about a Direct3D game - D3D didn’t allow for T&L acceleration in its original pipeline model, so a D3D app DOES have to explicitly “enable” T&L.


Sounds like a linking error to me. You’re sure you have the right libs to link with?


If no special code is needed to make use of T&L, then wouldn’t Quake 2 and 3 be much faster on a GeForce? Are they? I don’t know.

There are some features on the new GTS that do need special GL extensions, but I think they relate to per-pixel lighting.

Rob (and Sjonny too): The multitexturing functions are not found in any library file, since they are implementation-specific. You must load them yourself using wglGetProcAddress(const char *name), where name is the name of the function you want to load. It can seem a tad difficult, but it’s actually not that hard.

Here is one way to go:

1. Go to NVIDIA and get a header file with all the extension definitions (works fine for all manufacturers).
2. Include the file and declare function pointers.
3. Assign each function pointer a value with wglGetProcAddress.

Now you can call glMultiTexCoord2fARB.

PhilipT: Yep, Q3A (as well as any OpenGL-based game) will run (a lot) faster with a GeForce. Why? Because Q3A calls native OpenGL functions, and all OpenGL functions are loaded (when you start Windows) from the driver you currently have installed. If your driver can perform function X in hardware, each call to X will be performed in hardware. If your driver does not support hardware acceleration for X, X will be run in software (since that is how the manufacturer implemented X).