Utilizing a Graphics Card

I purchased a graphics card that is supposed to support OpenGL. It’s not anything high end like a Radeon, but it has texture mapping, lighting, and geometry capabilities, to name a few. However, after successfully installing the card and running my program, I noticed no increase in the speed of my application.

Am I supposed to write special code to take advantage of the card’s capabilities, or is it supposed to enhance my application on its own? If I do have to write special code, how do I accomplish this?


First of all, what kind of card is it, specifically? That will help us determine if the card is at fault.

Make sure to go to your card manufacturer’s website and download the latest drivers.

After that, make sure you are using a pixel format the card supports. Unfortunately, there is no simple function to figure out which formats are accelerated. You can try the developer’s website to see if they have any information on OpenGL programming, but if the manufacturer isn’t nVidia or ATI, they probably don’t.

So, if they don’t have anything on their site, just use a 16-bit color buffer (probably 5-6-5) with a 16-bit z-buffer, and set the other buffer parameters (stencil, accumulation, and so on) to 0. If your card supports OpenGL at all, it should accelerate a simple format like that.
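Since you didn’t mention a platform, here is a minimal sketch of requesting that format on Windows through WGL, assuming you already have a device context (`hdc`) for your window; the helper name `SetSimplePixelFormat` is just for illustration:

```c
#include <windows.h>

/* Request a simple, widely accelerated format: 16-bit RGBA color,
   16-bit z-buffer, and every other buffer size left at 0. */
BOOL SetSimplePixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    ZeroMemory(&pfd, sizeof(pfd));   /* all other buffer sizes = 0 */
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 16;             /* 16-bit color (typically 5-6-5) */
    pfd.cDepthBits = 16;             /* 16-bit z-buffer */

    /* Ask the driver for the closest matching format and select it. */
    int format = ChoosePixelFormat(hdc, &pfd);
    if (format == 0)
        return FALSE;
    return SetPixelFormat(hdc, format, &pfd);
}
```

Call this after creating your window but before `wglCreateContext`; if it returns FALSE, the driver couldn’t supply even this basic format.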

If that still doesn’t get you hardware acceleration, then your card likely doesn’t accelerate OpenGL (or, at least, it doesn’t perform any better than software rendering on your machine).
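One way to check which case you’re in: with a current OpenGL context, `glGetString(GL_RENDERER)` names the renderer actually in use, and Microsoft’s unaccelerated software implementation on Windows reports itself as "GDI Generic". The sketch below assumes that setup; the helper `is_software_renderer` only inspects the string, so in your program you would feed it the real result of `glGetString`:

```c
#include <string.h>

/* In a real program, with a context current, you would obtain the
   string like this:
       const char *renderer = (const char *)glGetString(GL_RENDERER);
*/
int is_software_renderer(const char *renderer)
{
    /* Microsoft's generic (unaccelerated) OpenGL implementation
       identifies itself as "GDI Generic". */
    return renderer != NULL && strstr(renderer, "GDI Generic") != NULL;
}
```

If this reports the generic renderer even after installing the vendor’s drivers, your pixel format (or the card itself) isn’t being accelerated.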