Texture memory limit

I am working with a graphics card with 768 MB of memory. I would like to know how much of this memory is available for storing texture data. The scene I am loading uses almost 300 MB of textures. I load all the textures into memory, but when I render some textures the application's performance drops. I suspect those textures have been stored in system RAM instead of video memory. Is there any tool or function that reports how much video memory you are using and how much is still free?
Thank you

Hi M27, I've never used it myself, but you should be able to check whether your textures are resident (high performance) or not:

GLboolean glAreTexturesResident(GLsizei n, const GLuint *textures, GLboolean *residences);
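
A minimal usage sketch (my own, untested example; the textureIDs vector is assumed to hold your texture object names, and note that some drivers treat residency only as a hint):

#include <GL/gl.h>
#include <cstdio>
#include <vector>

// Report which of the given texture objects the driver does NOT consider
// resident in video memory right now.
void reportResidency(const std::vector<GLuint>& textureIDs)
{
    if (textureIDs.empty())
        return;
    std::vector<GLboolean> resident(textureIDs.size());
    // Returns GL_TRUE if all textures are resident; only when it returns
    // GL_FALSE is the residences array guaranteed to be filled in.
    GLboolean allResident = glAreTexturesResident(
        (GLsizei)textureIDs.size(), &textureIDs[0], &resident[0]);
    if (allResident) {
        printf("all textures are resident\n");
        return;
    }
    for (size_t i = 0; i < textureIDs.size(); ++i)
        if (!resident[i])
            printf("texture %u is not resident in video memory\n",
                   (unsigned)textureIDs[i]);
}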

Cheers,
N.

First make sure you really have 300MB of textures (I’m talking about texture formats and other factors here).

Do you use the driver's default texture quality (GL_RGB), or do you enforce a certain internal texture format (GL_RGB4, GL_RGB8, compressed)? A way to check what the driver actually chose is sketched after these questions.

Perhaps some textures are larger and therefore less cache-effective.

Do you sort by material? Sometimes enabling more textures in an application can lead to excessive texture/state switches.

How many render targets do you have, and how large are they?

Shaders and vertex arrays usually take less space, but keep in mind that’s at least a few MB, too.

Are there any other applications running in the background that may be using GPU memory?
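
To check the real footprint, something along these lines should work (an untested sketch using standard glGetTexLevelParameteriv queries; sum it over all your textures and mip levels to get a realistic total):

#include <GL/gl.h>
#include <GL/glext.h>   // for the GL_TEXTURE_COMPRESSED* tokens
#include <cstdio>

// Print what the driver actually allocated for one mip level of a 2D texture.
void printTextureLevelInfo(GLuint tex, GLint level)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    GLint w = 0, h = 0, internalFmt = 0, compressed = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_WIDTH, &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_HEIGHT, &h);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_INTERNAL_FORMAT, &internalFmt);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_COMPRESSED, &compressed);
    if (compressed) {
        GLint size = 0;
        glGetTexLevelParameteriv(GL_TEXTURE_2D, level,
                                 GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &size);
        printf("level %d: %dx%d, internal format 0x%x, %d bytes (compressed)\n",
               level, w, h, (unsigned)internalFmt, size);
    } else {
        // Uncompressed estimate: assume 4 bytes per texel; adjust for your formats.
        printf("level %d: %dx%d, internal format 0x%x, ~%d bytes\n",
               level, w, h, (unsigned)internalFmt, w * h * 4);
    }
}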

I am using the textures I am provided with; normally they are compressed textures in DDS format. The performance drop is not due to texture/state switches. I have tested the application with reduced texture dimensions and everything works fine, and in that case I do not reduce the texture/state switches.
Thank you for your reply.

Try checking what is going on with NVPerfKit for NVidia or gDebugger for ATI. The latest AMD/ATI performance-related OpenGL extension might be worth checking out, too.
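
If your driver exposes them, there are also extensions that report free video memory directly. A sketch, assuming GL_ATI_meminfo or the NVIDIA-only GL_NVX_gpu_memory_info is present (token values taken from the extension specs; not every driver ships these):

#include <GL/gl.h>
#include <cstdio>
#include <cstring>

#ifndef GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
#endif

// Query currently available video memory through vendor extensions, if any.
void printFreeVideoMemory()
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    if (ext && strstr(ext, "GL_NVX_gpu_memory_info")) {
        GLint kb = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &kb);
        printf("NVIDIA: %d KB of video memory currently available\n", kb);
    } else if (ext && strstr(ext, "GL_ATI_meminfo")) {
        GLint info[4] = {0};   // info[0] = free texture memory in the pool, in KB
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, info);
        printf("ATI: %d KB free for textures\n", info[0]);
    } else {
        printf("No memory-info extension available on this driver\n");
    }
}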

8800GTX/Ultra?

The scene I am loading uses almost 300 MB of textures. I load all the textures into memory, but when I render some textures the application's performance drops. I suspect those textures have been stored in system RAM instead of video memory.

Probably right. My experience with the NVidia driver is you have to pre-render with your textures to force them onto the card to avoid first-render frame rate glitches in areas of high texture content. If you don’t, I believe some/all live only in CPU memory, so swiveling the camera to areas of high texture/material content can result in lots of behind-the-scenes GL driver CPU->GPU memory paging. Swivel away and back and it’s fine. Is that what you’re seeing?

A technique I used when I had that problem:

I generated a small (32x32) version of each texture.

I render in near-to-far order.

For each mesh I render, I issue a depth-count (occlusion) query, which I read back the next frame.

The first time I render a model when it comes into view, or if the model covered fewer than 20 pixels on screen last frame, I render with the small texture; otherwise I render with the big texture.

For typical scenes, this spreads out the upload across many frames, and results in much better performance. Also, for objects that are almost totally occluded, I don’t spend texture RAM, which is great for smaller graphics cards.

I don’t know if this will help in your case, but it was surprisingly effective for me.
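
A rough sketch of that bookkeeping, in case it helps (my own reconstruction of the idea, not the original code; it assumes GLEW or similar exposes the GL 1.5 occlusion-query entry points, and drawMesh, bigTex and smallTex are placeholders):

#include <GL/glew.h>

struct MeshLOD {
    GLuint query;           // occlusion query object for this mesh (0 = none yet)
    GLuint pixelsLastFrame; // samples that passed the depth test last frame
    bool   queried;         // whether a result has ever been issued
};                          // zero-initialize before first use

// Call while drawing the mesh, with the scene rendered roughly near-to-far.
void drawMeshWithQuery(MeshLOD& m, GLuint bigTex, GLuint smallTex,
                       void (*drawMesh)())
{
    // Big texture only if the mesh was clearly visible last frame.
    GLuint tex = (m.queried && m.pixelsLastFrame >= 20) ? bigTex : smallTex;
    glBindTexture(GL_TEXTURE_2D, tex);

    if (m.query == 0)
        glGenQueries(1, &m.query);

    glBeginQuery(GL_SAMPLES_PASSED, m.query);
    drawMesh();
    glEndQuery(GL_SAMPLES_PASSED);
    m.queried = true;
}

// Call early next frame, before deciding which texture to use.
void fetchQueryResult(MeshLOD& m)
{
    if (!m.queried)
        return;
    GLint available = 0;
    glGetQueryObjectiv(m.query, GL_QUERY_RESULT_AVAILABLE, &available);
    if (available)
        glGetQueryObjectuiv(m.query, GL_QUERY_RESULT, &m.pixelsLastFrame);
}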

That is exactly what I am seeing.
When you say pre-render, do you mean rendering the whole scene or just loading all the textures?
I load all the textures at the beginning, but when I render some parts of the scene for the first time the performance drops.
I am now developing a texture manager to keep only the necessary textures in memory. Do you think that is a good solution?

Well, just to be clear I said “pre-render with your textures”, not “pre-render the scene”. You need to do some rendering with each texture – the geometry you draw it on is irrelevant.

If you’re in a startup mode, rendering the texture with a full-screen quad works just fine. Though at run-time you might prefer something with less fill. You can probably use a degenerate vertex shader that causes your quad to be totally culled away, consuming no fill and minimal vertex hit.

Keep in mind that the goal here is “not” to render the texture on the screen. The goal is to fake the OpenGL driver into believing you’re ready to render with the texture, so it’ll get off its duff and upload it to GPU memory. Since there’s no way to force that upload via the OpenGL API, and some drivers do this GPU upload lazily out of your control, you have to trick it into uploading it. Don’t like it, but that’s the way it is now.
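
For the startup-mode variant, the trick could look roughly like this (purely illustrative; it assumes identity modelview/projection matrices, a GL 1.x fixed-function setup, and that the frame is cleared afterwards so nothing actually appears on screen):

#include <GL/gl.h>
#include <vector>

// Bind each texture and draw a full-screen quad once, so the driver actually
// pushes the texture data to the card before real rendering starts.
void prewarmTextures(const std::vector<GLuint>& textureIDs)
{
    glEnable(GL_TEXTURE_2D);
    for (size_t i = 0; i < textureIDs.size(); ++i) {
        glBindTexture(GL_TEXTURE_2D, textureIDs[i]);
        // Full-screen quad, assuming identity modelview/projection matrices.
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(-1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1,  1);
            glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
    }
    glFinish();                    // give the driver time to finish the uploads
    glClear(GL_COLOR_BUFFER_BIT);  // throw the result away; we only wanted the uploads
}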

I am now developing a texture manager to keep only the necessary textures in memory. Do you think that is a good solution?

Depends on your app, but probably yes. Having one is essential for apps like a world-wide flight sim, for instance. Compare your maximum memory consumption for textures+VBOs+FBOs+system framebuffer with your minimum GPU memory spec. If it's on the same order or larger, then you probably need one.
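
For what it's worth, one possible shape for such a manager (a sketch only; loadTextureFromDisk stands in for your DDS loader, and the byte counts are whatever that loader reports). It keeps at most a fixed budget of textures alive and evicts the least recently used ones:

#include <GL/gl.h>
#include <list>
#include <map>
#include <string>

class TextureManager {
public:
    explicit TextureManager(size_t budgetBytes) : budget_(budgetBytes), used_(0) {}

    // Return a GL texture id for 'name', loading it (and evicting older
    // textures) if it is not already in the cache.
    GLuint acquire(const std::string& name) {
        std::map<std::string, Entry>::iterator it = cache_.find(name);
        if (it != cache_.end()) {
            lru_.remove(name);        // mark as most recently used
            lru_.push_front(name);
            return it->second.id;
        }
        size_t bytes = 0;
        GLuint id = loadTextureFromDisk(name, &bytes);   // placeholder loader
        while (used_ + bytes > budget_ && !lru_.empty())
            evictLeastRecentlyUsed();
        Entry e = { id, bytes };
        cache_[name] = e;
        lru_.push_front(name);
        used_ += bytes;
        return id;
    }

private:
    struct Entry { GLuint id; size_t bytes; };

    void evictLeastRecentlyUsed() {
        const std::string name = lru_.back();   // least recently used texture
        Entry& e = cache_[name];
        glDeleteTextures(1, &e.id);
        used_ -= e.bytes;
        cache_.erase(name);
        lru_.pop_back();
    }

    // Placeholder: load a DDS file, create the GL texture object and report
    // how many bytes of texture memory it occupies.
    GLuint loadTextureFromDisk(const std::string& name, size_t* bytes);

    size_t budget_, used_;
    std::map<std::string, Entry> cache_;
    std::list<std::string> lru_;
};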

I vaguely remember forcing the texture upload by binding the textures to a framebuffer object and enabling/disabling it.
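
Attaching and detaching the texture on an FBO (via GL_EXT_framebuffer_object) would look roughly like this, though I can't vouch that every driver takes it as an upload hint, and compressed (DXT) textures are generally not color-renderable, so the attachment may come back as unsupported for them:

#include <GL/glew.h>   // assumes GLEW exposes the EXT_framebuffer_object entry points
#include <cstdio>

// Attach the texture to a throwaway FBO and detach it again, hoping the
// driver uses the attachment as a cue to move the texture into video memory.
void touchTextureViaFBO(GLuint tex)
{
    GLuint fbo = 0;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
        printf("texture %u could not be attached (e.g. compressed format)\n",
               (unsigned)tex);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // back to the window framebuffer
    glDeleteFramebuffersEXT(1, &fbo);
}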

N.