Determining Hardware OpenGL Capabilities

I am writing an OpenGL application for Windows XP and Windows 2000 using VC++.

I am trying to find a way for my program to determine (during run time) whether the graphics card supports OpenGL hardware acceleration, and how much RAM is installed on the graphics card.

Is there a better way than having to perform some rendering benchmarks at run time to determine performance with various window/resolution sizes?

In searching the forums, I’ve found several posts saying I shouldn’t need to know whether HW is supported or how much RAM is on the HW. However, when a user of my program calls and says it’s much slower in full screen mode, it is usually because they have an old graphics card or, more likely, not enough VRAM to support full screen HW acceleration. So I would like to find a way to do this as a support tool for users of my OpenGL app.

Any solutions or ideas?

Thanks in advance,

[This message has been edited by robosport (edited 08-25-2003).]

I did this a few weeks ago, and found a lot about it. Use glGetString to see it.


If you have something like “Microsoft” that means no hw acceleration. If you see ATI, NVIDIA, 3dfx Interactive or whatever then you have OpenGL HW acceleration.

If your hw supports OpenGL, the hw acceleration is automatically selected.

[This message has been edited by GL_ZERO (edited 08-25-2003).]

Thanks GL_ZERO for the quick response. That solution answers part of my question. It tells me when I definitely do not have HW acceleration (when Microsoft Generic is found in the string). It doesn’t tell me for sure that rendering is HW accelerated, though. Even with a hardware vendor string, the driver could still fall back to software if I select a rendering option not supported in HW.

Still looking for a way to determine how much VRAM is on the card.


Sorry, there’s no way in OpenGL to do what you want. You’ll just have to trust the driver to do the right thing.

Yes, al_bob, I’ve seen some of your previous posts on the topic. Thank you for the response.

Unfortunately, my post is not related to a driver or run-time performance optimization issue.

I’m trying to make a tool that will let users of a program know if they should upgrade their graphics card to get optimized performance.

If they only have 16MB VRAM (even on a relatively recent chipset) and they are trying to render full screen on a 1600x1200 display, then it doesn’t matter what the driver does, the render will fall back to software and frame rate will be slow.

Just looking for ideas (not necessarily pure OpenGL) on how to identify poorly performing configurations, so the check can be incorporated into a run-time test tool.


[This message has been edited by robosport (edited 08-25-2003).]