Methods for getting dedicated VRAM size.

Alright. After days of trying, I’d like to ask for practical methods of getting the dedicated VRAM size programmatically (C++ preferred).

The only way that has worked for us so far (most of the time) is IDxDiagContainer’s “szDisplayMemoryEnglish” property, but it fails in some cases. For example, it reports total VRAM (dedicated + shared) as dedicated for some cards (I cannot remember the exact model). And just today it returned “N/A” for an Intel 82915G chip that should have 128 MB of VRAM.
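For reference, here is roughly what our current query looks like: a minimal sketch with error handling trimmed, assuming the DirectX SDK’s dxdiag.h and linking ole32.lib/oleaut32.lib.

    #include <windows.h>
    #include <initguid.h>
    #include <dxdiag.h>
    #include <cstdio>

    int main()
    {
        CoInitialize(nullptr);

        IDxDiagProvider* provider = nullptr;
        CoCreateInstance(CLSID_DxDiagProvider, nullptr, CLSCTX_INPROC_SERVER,
                         IID_IDxDiagProvider, (void**)&provider);

        DXDIAG_INIT_PARAMS params = {};
        params.dwSize = sizeof(params);
        params.dwDxDiagHeaderVersion = DXDIAG_DX9_SDK_VERSION;
        provider->Initialize(&params);

        IDxDiagContainer *root = nullptr, *devices = nullptr, *adapter = nullptr;
        provider->GetRootContainer(&root);
        root->GetChildContainer(L"DxDiag_DisplayDevices", &devices);
        // First adapter is child "0"; the SDK sample enumerates child
        // container names instead of hard-coding this.
        devices->GetChildContainer(L"0", &adapter);

        // The property comes back as a human-readable BSTR, e.g. "128.0 MB".
        VARIANT var;
        VariantInit(&var);
        if (SUCCEEDED(adapter->GetProp(L"szDisplayMemoryEnglish", &var)) &&
            var.vt == VT_BSTR)
            wprintf(L"display memory: %s\n", var.bstrVal);
        VariantClear(&var);

        adapter->Release(); devices->Release(); root->Release();
        provider->Release();
        CoUninitialize();
        return 0;
    }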

Suggestions/ideas of any kind will be appreciated :)

For such an Intel chip, I thought it actually has no dedicated memory; it is all taken dynamically from main RAM.

Ask yourself why you need to know the amount of VRAM: to size the textures so that performance is acceptable?
In that case I believe the best approach is to benchmark each texture size and, by dichotomy (binary search), select the highest texture resolution that still keeps performance acceptable (and raises no GL error).
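Something like the sketch below, assuming an active GL context; benchmarkMs is a hypothetical functor you would supply that renders a test scene with the given texture bound and returns the average frame time in milliseconds.

    #include <GL/gl.h>
    #include <functional>

    // Binary-search ("dichotomy") the largest power-of-two texture size that
    // allocates without a GL error and still renders within the time budget.
    GLsizei largestUsableTextureSize(int loExp, int hiExp, double budgetMs,
                                     const std::function<double(GLuint)>& benchmarkMs)
    {
        GLsizei best = 0;
        while (loExp <= hiExp) {
            int midExp = (loExp + hiExp) / 2;
            GLsizei size = 1 << midExp;

            while (glGetError() != GL_NO_ERROR) {}   // clear stale errors

            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

            bool ok = (glGetError() == GL_NO_ERROR) &&
                      benchmarkMs(tex) <= budgetMs;
            glDeleteTextures(1, &tex);

            if (ok) { best = size; loExp = midExp + 1; }   // usable: try larger
            else    { hiExp = midExp - 1; }                // failed: back off
        }
        return best;   // 0 if even the smallest size failed
    }

    // e.g. largestUsableTextureSize(6, 12, 16.7, myBenchmark)
    // searches 64..4096 against a ~60 fps budget.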

  • For Windows, I recommend reading the following document to learn what the OS reports as graphics memory. It depends both on discrete vs. integrated graphics adapters and on Vista vs. pre-Vista:

http://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/GraphicsMemory.doc

Also, take a look at the VideoMemory sample in the DirectX SDK:
http://msdn.microsoft.com/en-us/library/cc308070(VS.85).aspx

At least on Windows you are lucky: there is an OS API to query such things.

  • On Mac, you are lucky too: the Core Graphics and IOKit APIs let you query such information. See the sample code here:

http://developer.apple.com/qa/qa2004/qa1168.html

  • On Linux, the only thing I found is nVidia-specific: the NV-CONTROL X server extension and its NV_CTRL_VIDEO_RAM attribute. The API (NVCtrlLib.h and NVCtrl.h) is provided with nvidia-settings; see the sketch after the link:

ftp://download.nvidia.com/XFree86/nvidia-settings/
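A minimal sketch of that query, assuming the NVCtrl.h/NVCtrlLib.h headers from the nvidia-settings source and linking with -lXNVCtrl -lX11 (NV_CTRL_VIDEO_RAM reports kilobytes):

    #include <cstdio>
    #include <X11/Xlib.h>
    #include "NVCtrl.h"
    #include "NVCtrlLib.h"

    int main()
    {
        Display* dpy = XOpenDisplay(nullptr);
        if (!dpy) { std::fprintf(stderr, "cannot open display\n"); return 1; }

        // Screen 0, display mask 0; adjust for multi-GPU setups.
        int videoRamKb = 0;
        if (XNVCTRLQueryAttribute(dpy, 0, 0, NV_CTRL_VIDEO_RAM, &videoRamKb))
            std::printf("video RAM: %d MB\n", videoRamKb / 1024);
        else
            std::fprintf(stderr, "NV-CONTROL query failed\n");

        XCloseDisplay(dpy);
        return 0;
    }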

On NVIDIA cards you can use NVAPI; I’m not sure whether you can get this particular value, though.

Thanks ZbuffeR.

You got it. That is what we are doing now, profiling on application launch.

I remember a sample from NVIDIA giving the video RAM size, like MalcolmB posted above. Also, I thought one could do this by allocating textures of GL_MAX_TEXTURE_SIZE in a loop until it fails; that would give some hint of the video memory (free the textures when done). It would be far from accurate, but it’s something. If textures are the problem, it’s better to use texture compression; visual quality is then another issue.

Generally that doesn’t work AFAIK, as OpenGL keeps allocating in CPU space…

In fact, there are 5 methods mentioned in http://msdn.microsoft.com/en-us/library/cc308070(VS.85).aspx (thanks to “overlay”):

  • GetVideoMemoryViaDirectDraw
  • GetVideoMemoryViaWMI
  • GetVideoMemoryViaDxDiag
  • GetVideoMemoryViaD3D9
  • GetVideoMemoryViaDXGI
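Of the five, GetVideoMemoryViaDXGI is the most direct on Vista and later, since DXGI_ADAPTER_DESC reports DedicatedVideoMemory separately from DedicatedSystemMemory and SharedSystemMemory. Here is a minimal sketch along those lines (not the sample’s exact code; link with dxgi.lib):

    #include <windows.h>
    #include <dxgi.h>
    #include <cstdio>

    int main()
    {
        IDXGIFactory* factory = nullptr;
        if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
            return 1;

        // Walk every adapter; DedicatedVideoMemory excludes shared system RAM.
        IDXGIAdapter* adapter = nullptr;
        for (UINT i = 0;
             factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        {
            DXGI_ADAPTER_DESC desc;
            if (SUCCEEDED(adapter->GetDesc(&desc)))
                wprintf(L"%s: %u MB dedicated VRAM\n", desc.Description,
                        (unsigned)(desc.DedicatedVideoMemory / (1024 * 1024)));
            adapter->Release();
        }
        factory->Release();
        return 0;
    }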