How to query the total amount of video memory?

Excuse me… with GL_RGB16, only 128 MB…

Though you might be able to use other formats to consume more memory, you will never find a format that takes ALL of the VRAM, because you MUST have room for the framebuffer at least. So creating the largest memory-consuming texture you can still won't give much of a clue to how much VRAM a user's card has. :slight_smile:

-SirKnight

OK, I'm back from bed. Thanks for the replies.

Basically, what we want to do with the detected number is just make some very basic assumptions and adjust the texture/geometry LOD based on it. I don't think we'll depend on the number so heavily that every bit of the VRAM gets used.

Speedy, do you know where I can find info about the Entech library? It looks like it's a commercial product, right?

Basically, what we want to do with the detected number is just make some very basic assumptions and adjust the texture/geometry LOD based on it. I don't think we'll depend on the number so heavily that every bit of the VRAM gets used.
Yes, but as I pointed out, making any assumptions based on it is not wise. Or at least not future-proof.

You could just ask the user to select an appropriate memory size. NWN did it like that, and I prefer it that way. You don’t confuse a user by asking about “detail levels”; you just ask him what his card has in it.

Knowing the amount of video memory can be useful for setting default settings; I don't know why some guys here are so strongly against it. Even Doom III actually detects the amount of memory on your graphics card for its defaults. I agree that it should only be used as a clue and not as divine truth, though :wink: and that the user should be able to alter the settings at will.

Just use DirectX to query the amount of video mem if you need it at the start of your application.
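Something along these lines with DirectDraw 7 would do it (a minimal sketch, Win32 only, needs ddraw.lib and dxguid.lib; the totals are whatever the driver chooses to report, so treat them as a rough clue):

#include <windows.h>
#include <ddraw.h>
#include <stdio.h>

int main()
{
    IDirectDraw7* dd = NULL;
    if (FAILED(DirectDrawCreateEx(NULL, (void**)&dd, IID_IDirectDraw7, NULL)))
        return 1;

    DDSCAPS2 caps;
    ZeroMemory(&caps, sizeof(caps));
    caps.dwCaps = DDSCAPS_VIDEOMEMORY | DDSCAPS_LOCALVIDMEM;   /* on-card memory only */

    DWORD total = 0, avail = 0;
    if (SUCCEEDED(dd->GetAvailableVidMem(&caps, &total, &avail)))
        printf("local video memory: %lu bytes total, %lu bytes free\n", total, avail);

    dd->Release();
    return 0;
}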

Why not upload 1 MB textures one at a time using glPrioritizeTextures and check glAreTexturesResident after each upload? Or am I missing something here?
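Roughly what I have in mind, as a sketch (it assumes a current GL context; 512x512 RGBA8 is about 1 MB per texture, and probe_vram_mb / MAX_PROBE are just names I made up here):

#include <GL/gl.h>

/* Upload ~1 MB textures one at a time and stop as soon as one fails to
   become resident. Whether the residency report means anything at all is
   entirely up to the driver, so treat the result as a rough clue only. */
int probe_vram_mb(void)
{
    enum { MAX_PROBE = 512 };                     /* arbitrary upper bound */
    static unsigned char pixels[512 * 512 * 4];   /* dummy texel data */
    GLuint tex[MAX_PROBE];
    GLclampf high = 1.0f;
    int i;

    glGenTextures(MAX_PROBE, tex);
    for (i = 0; i < MAX_PROBE; ++i) {
        GLboolean resident = GL_FALSE;
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glPrioritizeTextures(1, &tex[i], &high);  /* hint: keep this one resident */
        if (!glAreTexturesResident(1, &tex[i], &resident) || !resident)
            break;                                /* roughly i MB fitted so far */
    }
    glDeleteTextures(MAX_PROBE, tex);
    return i;                                     /* approximate usable MB */
}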

There's no guarantee that glAreTexturesResident returns useful info; it might always return true. Besides, 3dLabs chipsets have a form of virtual memory where only the texels needed for rendering are paged in, so on their cards the results would be meaningless as well.

If you're using SDL you can use the following:

#include <stdio.h>
#include <SDL/SDL.h>                    /* SDL 1.2 */

SDL_Init(SDL_INIT_VIDEO);               /* the video subsystem must be up first */
const SDL_VideoInfo* info = SDL_GetVideoInfo();
printf("video_mem: %u\n", info->video_mem);
SDL_Quit();

I'm not 100% sure what it returns (does anyone know? Who's delved into the SDL code?).

Well, most games I know of put all resources into VRAM for speed reasons. I would just target one memory size, e.g. 64 MB, and design my world maps to fit into that limit. It's much easier to deal with all things related to game dev when you target just a small set of capabilities; otherwise the system complexity can get out of hand. This goes for system memory as well.

Err, does that apply to PS2 games too?!
I think you're mixing up video memory with system memory - most games target a specific system memory quantity, not video memory. A minimum video memory would be specified to hold the part of the world visible in a single frame, not the whole world itself.

The world is divided into maps or levels. At the start of such a level, all of its textures are loaded into VRAM. When a level change occurs, the existing textures are unloaded from VRAM and the new set is loaded in. This prevents the card from sourcing textures from AGP or, worse, system RAM. System RAM is another issue, but closely related to graphics. You don't want to go to disk to load objects into the game; you want them loaded into system RAM first and then made available quickly for the particular level the gamer is playing, of course. You don't load the whole world, i.e. all the maps, into memory, since you don't play all maps all the time, just some maps or one map.
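Roughly the pattern I mean, as a sketch (load_texture_file() is a made-up placeholder for whatever image loader you use, and 256 textures per level is an arbitrary cap):

#include <GL/gl.h>
#include <stdlib.h>

/* Hypothetical image loader: returns malloc'd RGBA pixels read from disk. */
unsigned char* load_texture_file(const char* path, int* w, int* h);

typedef struct {
    GLuint ids[256];    /* arbitrary per-level limit */
    int    count;
} LevelTextures;

/* At level start: pull every texture the level needs into VRAM. */
void level_textures_load(LevelTextures* lt, const char** files, int count)
{
    lt->count = count;
    glGenTextures(count, lt->ids);
    for (int i = 0; i < count; ++i) {
        int w, h;
        unsigned char* pixels = load_texture_file(files[i], &w, &h);
        glBindTexture(GL_TEXTURE_2D, lt->ids[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        free(pixels);
    }
}

/* On level change: throw the old set away before loading the next one. */
void level_textures_unload(LevelTextures* lt)
{
    glDeleteTextures(lt->count, lt->ids);
    lt->count = 0;
}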

For non-PC games the computer architecture is built differently enough that the latency you see in a PC game isn't visible. That's why you can stream data off the CD-ROM easily on the PlayStation, for example; if you did the same on a PC, not only does the CD-ROM not spin all the time (which would help hide latency), but there is a major problem with ATA speeds and keeping the CPU from starving. Two different architectures. But who cares about console games around here? I don't, and most don't either.

Originally posted by JD:
This prevents the card from sourcing textures from AGP or, worse, system RAM.
What do you think AGP was invented for? To speed up load times between levels in games?! It is, of course, perfectly acceptable to page textures into video memory from AGP memory during play. What you don’t want is to be doing this several times while rendering a single frame. I haven’t a clue where you got your information about common practice from.

System RAM is another issue, but closely related to graphics. You don't want to go to disk to load objects into the game; you want them loaded into system RAM first and then made available quickly for the particular level the gamer is playing, of course.
That's awfully considerate of you - I'm sure the player appreciates that, while the graphics are crap, at least there are no load times. Worth every penny of the £30. You do, of course, want to go to disk to load objects into your game, between things called 'levels'.


You don't load the whole world, i.e. all the maps, into memory, since you don't play all maps all the time, just some maps or one map.

Which game are you talking about? I get the impression you’re talking about one specific game you’ve played, and not the wide variety of game genres there are sloshing around these days.

For non-PC games the computer architecture is built differently enough that the latency you see in a PC game isn't visible.
A unified memory model does make things easier, yes. Texture updates no longer have to be considered so carefully, as there's virtually no cost… but that's mainly compared to system memory-to-video memory uploads, not so much AGP to video memory.