I am using a 1.2 header file from nVidia. My driver supports the ARB_imaging extension to 1.2.
Problem is that when I wglGetProcAddress() glBlendColor, I seem to be kicked into software rendering.
Calls to glGetString(GL_VERSION), glGetString(GL_RENDERER), etc., still tell me I’m using my TNT2, but performance is like 0.2 fps.
Does anybody know why this is??
Those functions are not implemented in hardware; they are implemented in software.
Just because a card supports an extension does not mean it has hardware support for it.
TNT2 does not support the blend equation or constant color features in hardware, whereas GF does. You may have noticed, for example, that we support EXT_blend_color, EXT_blend_minmax, and EXT_blend_subtract on GF, but not on TNT2.
wglGetProcAddress itself has no impact on performance. However, the rendering state that you use does.
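For what it’s worth, here is the usual way to check the extension string before grabbing the entry point. The helper name `has_extension` is my own; the point of the token-by-token search is that a plain strstr() can give false positives when one extension name is a prefix of another (e.g. finding EXT_blend_color inside a longer name).

```c
#include <string.h>

/* Check whether `name` appears as a complete token in the
 * space-separated string returned by glGetString(GL_EXTENSIONS).
 * A naive strstr() can match a prefix of a longer extension name,
 * so we verify the match is bounded by spaces or string ends. */
static int has_extension(const char *ext_string, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_string;

    while ((p = strstr(p, name)) != NULL) {
        /* Must start at the beginning of the string or after a space... */
        int starts_ok = (p == ext_string) || (p[-1] == ' ');
        /* ...and end at a space or at the end of the string. */
        int ends_ok = (p[len] == ' ') || (p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len;
    }
    return 0;
}
```

But as noted above, even if this returns true and wglGetProcAddress("glBlendColor") hands back a non-NULL pointer, the driver is still free to fall back to software rasterization once you actually enable that blending state.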
Ouch. That sucks.
I never noticed the difference in the extensions because I don’t own, nor do I have access to, a GF…
I assumed that since the TNT2 driver supports ARB_imaging that it would support BlendColor… Guess not.
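In case it helps anyone doing runtime checks: the version reported by glGetString(GL_VERSION) is just a string like "1.2.1", so you can parse the major/minor numbers before assuming the 1.2 entry points exist. This is only a sketch under that assumption; the function names below are mine, and (as this thread shows) passing the version check still says nothing about hardware acceleration.

```c
#include <stdio.h>

/* Parse the leading "major.minor" of a GL_VERSION string such as
 * "1.2.1" or "1.2 <vendor info>". Returns 1 on success, 0 on failure. */
static int parse_gl_version(const char *version, int *major, int *minor)
{
    return version && sscanf(version, "%d.%d", major, minor) == 2;
}

/* Convenience check: is the reported version at least maj.min? */
static int gl_version_at_least(const char *version, int maj, int min)
{
    int vmaj, vmin;
    if (!parse_gl_version(version, &vmaj, &vmin))
        return 0;
    return (vmaj > maj) || (vmaj == maj && vmin >= min);
}
```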
Hi! Thanks for that post; I had the same problem with glBlendColor. I thought my TNT2 supported glBlendColor, but it doesn’t! Where can I find information on which extensions are supported by a given video card?
There is a document on nvidia’s developer web site (probably under opengl sdk) called nvOpenGLSpecs.pdf. It has a big table of supported extensions on all their cards.
But that table can be a bit misleading if taken purely at face value, as it doesn’t always state clearly whether a given extension is supported in hardware or only in software. A good example is the imaging ‘extension’ as listed in the table: the TNT is marked as supporting it, but nothing indicates that the support is software only.