You can’t know. Even if you measure when you start up, the user could change the display mode from 16-bit to 32-bit colour, dynamically enable a second screen (at 1600x1200 or bigger), or do one of a million other things.
I’d provide a slider saying “faster <–> prettier” and let the user decide. Default to the middle.
A laptop (or other machine) running an internal and external screen, each at 1600x1200, with unified back and depth buffers, at 32 bpp, will take at least this much graphics memory:
3200 x 1200 pixels x 4 bytes x 3 buffers = 46,080,000 bytes. That’s about 43.9 MB, just for frame buffers.
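As a sanity check on that arithmetic, something like this (the helper name is made up) reproduces the figure for any resolution, pixel size, and buffer count:

```cpp
#include <cstdio>

// Frame-buffer footprint: width * height * bytes per pixel * number of buffers.
static unsigned long frameBufferBytes(unsigned long w, unsigned long h,
                                      unsigned long bytesPerPixel,
                                      unsigned long buffers)
{
    return w * h * bytesPerPixel * buffers;
}

int main()
{
    // Two 1600x1200 screens side by side = 3200x1200, 32 bpp,
    // front + back + depth = 3 buffers.
    unsigned long bytes = frameBufferBytes(3200, 1200, 4, 3);
    std::printf("%lu bytes (%.1f MB)\n", bytes, bytes / (1024.0 * 1024.0));
    return 0;
}
```

That prints 46080000 bytes (43.9 MB), matching the figure above.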
> I’d provide a slider saying “faster <–> prettier” and let the user decide. Default to the middle.
I don’t necessarily agree with this. The user should have some say over the individual details and effects: texture quality, shadows, number of real lights, and so on. I do agree, however, that the program should not try to balance those sliders based on the size of video memory.
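For illustration only (the struct and field names are mine, not from any real engine), the point is simply to expose each effect as its own user-facing setting, with the overall slider as a default rather than the only control:

```cpp
// Hypothetical user-facing graphics settings: every field can be changed
// independently in the options screen instead of being derived from VRAM size.
struct GraphicsSettings {
    int  textureQuality;  // 0 = low, 1 = medium, 2 = high
    bool shadows;
    int  maxRealLights;   // hardware lights actually used per object
    int  overallSlider;   // the "faster <-> prettier" slider, 0..2
};

// Defaults: middle of the slider, everything still overridable by the user.
GraphicsSettings defaultSettings()
{
    GraphicsSettings s;
    s.textureQuality = 1;
    s.shadows        = true;
    s.maxRealLights  = 4;
    s.overallSlider  = 1;
    return s;
}
```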
If you want to auto-fit your graphical effects, do it in a better and more reliable way: profile the user’s machine at boot-up time (or give the user a button to push to auto-profile). Run a series of tests to see how much texture resolution and which other effects the user’s machine can handle.
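A minimal sketch of what I mean (the two engine hooks are placeholders, not a real API): render a fixed number of frames at each quality level, time them with a wall clock, and keep the highest level that still meets a target frame time.

```cpp
#include <chrono>

// Placeholder engine hooks -- replace with real calls into your renderer.
static void applyQualityLevel(int /*level*/) { /* set texture res, shadows, ... */ }
static void renderBenchmarkFrame()           { /* draw a representative scene */ }

// Average frame time in milliseconds over 'frames' frames at a quality level.
static double averageFrameMs(int level, int frames)
{
    applyQualityLevel(level);
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < frames; ++i)
        renderBenchmarkFrame();
    std::chrono::duration<double, std::milli> elapsed =
        std::chrono::steady_clock::now() - start;
    return elapsed.count() / frames;
}

// Pick the highest quality level whose average frame time meets the target.
int autoProfileQuality(int maxLevel, double targetMs)
{
    for (int level = maxLevel; level >= 0; --level)
        if (averageFrameMs(level, 100) <= targetMs)
            return level;
    return 0; // nothing met the target; fall back to the lowest level
}
```

The same test can be run again behind a “re-profile” button whenever the user changes display mode or plugs in another screen.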
Well, the problem is that I want to allocate a single VBO that will fit in VRAM without disturbing the textures too much (say 16 MB on a 64 MB board; on a 16 MB board I would allocate it in AGP memory). Its size matters because all the static geometry in a view must fit in this VBO (if it doesn’t, LOD is applied). That’s because the whole scene’s geometry will not fit in VRAM, and I don’t want to waste system memory (I can have more than 50 MB of geometry loaded, so holding it twice isn’t great).
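What I have in mind looks roughly like this (OpenGL 1.5 buffer objects, GLEW used only as a loader; the 16 MB budget is just my guess, since GL gives no portable way to query free VRAM, and the driver, not the application, ultimately decides whether the buffer lands in video memory or AGP memory):

```cpp
#include <GL/glew.h>   // any loader exposing OpenGL 1.5 buffer objects will do
#include <cstddef>

// Allocate one large static VBO up front; GL_STATIC_DRAW is only a hint that
// the driver should place it for fast drawing (usually video memory).
GLuint createGeometryPool(std::size_t budgetBytes)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, budgetBytes, NULL, GL_STATIC_DRAW);
    return vbo;
}

// Upload one chunk of static geometry into the pool at a chosen byte offset,
// so no second copy has to be kept in system memory afterwards.
void uploadChunk(GLuint vbo, std::size_t offset,
                 std::size_t size, const void* vertices)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, offset, size, vertices);
}
```

If the allocation is too large, glBufferData can fail with GL_OUT_OF_MEMORY, which is at least a crude signal that the budget guess was too optimistic.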